WorldWideScience

Sample records for memory computational systems

  1. Memory systems, computation, and the second law of thermodynamics

    International Nuclear Information System (INIS)

    Wolpert, D.H.

    1992-01-01

    A memory is a physical system for transferring information from one moment in time to another, where that information concerns something external to the system itself. This paper argues on information-theoretic and statistical mechanical grounds that useful memories must be of one of two types, exemplified by memory in abstract computer programs and by memory in photographs. Photograph-type memories work by exploiting a collapse of state space flow to an attractor state. (This attractor state is the "initialized" state of the memory.) The central assumption of the theory of reversible computation tells us that in any such collapsing, regardless of how the collapsing is achieved, the entropy of the system must increase. In concert with the second law, this establishes the logical necessity of the empirical observation that photograph-type memories are temporally asymmetric (they can tell us about the past but not about the future). Under the assumption that human memory is a photograph-type memory, this result also explains why we humans can remember only our past and not our future. In contrast to photograph-type memories, computer-type memories do not require any initialization and therefore are not directly affected by the second law. As a result, computer memories can be of the future as easily as of the past, even if the program running on the computer is logically irreversible. This is entirely in accord with the well-known temporal reversibility of the process of computation. The paper ends by arguing that the asymmetry of the psychological arrow of time is a direct consequence of the asymmetry of human memory. Together with the rest of the paper, this explains, explicitly and rigorously, why the psychological and thermodynamic arrows of time are correlated with one another. 24 refs

  2. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  3. Single-Chip Computers With Microelectromechanical Systems-Based Magnetic Memory

    NARCIS (Netherlands)

    Carley, L. Richard; Bain, James A.; Fedder, Gary K.; Greve, David W.; Guillou, David F.; Lu, Michael S.C.; Mukherjee, Tamal; Santhanam, Suresh; Abelmann, Leon; Min, Seungook

    This article describes an approach for implementing a complete computer system (CPU, RAM, I/O, and nonvolatile mass memory) on a single integrated-circuit substrate (a chip)—hence, the name "single-chip computer." The approach presented combines advances in the field of microelectromechanical

  4. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  5. Computational and empirical simulations of selective memory impairments: Converging evidence for a single-system account of memory dissociations.

    Science.gov (United States)

    Curtis, Evan T; Jamieson, Randall K

    2018-04-01

    Current theory has divided memory into multiple systems, resulting in a fractionated account of human behaviour. By an alternative perspective, memory is a single system. However, debate over the details of different single-system theories has overshadowed the converging agreement among them, slowing the reunification of memory. Evidence in favour of dividing memory often takes the form of dissociations observed in amnesia, where amnesic patients are impaired on some memory tasks but not others. The dissociations are taken as evidence for separate explicit and implicit memory systems. We argue against this perspective. We simulate two key dissociations between classification and recognition in a computational model of memory, A Theory of Nonanalytic Association. We assume that amnesia reflects a quantitative difference in the quality of encoding. We also present empirical evidence that replicates the dissociations in healthy participants, simulating amnesic behaviour by reducing study time. In both analyses, we successfully reproduce the dissociations. We integrate our computational and empirical successes with the success of alternative models and manipulations and argue that our demonstrations, taken in concert with similar demonstrations with similar models, provide converging evidence for a more general set of single-system analyses that support the conclusion that a wide variety of memory phenomena can be explained by a unified and coherent set of principles.
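
    The model named above belongs to the MINERVA 2 family of global-matching exemplar models. The Python sketch below is a generic illustration of the encoding-quality assumption, not the authors' implementation: study items are stored feature by feature with some probability, and lowering that probability (the stand-in for amnesia or reduced study time) shrinks the old/new recognition margin.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_feat = 20, 40
study = rng.choice([-1, 1], size=(n_items, n_feat))

def encode(items, quality):
    # Each feature is stored with probability `quality`; lost features become 0.
    mask = rng.random(items.shape) < quality
    return items * mask

def intensity(probe, memory):
    # MINERVA-2-style echo intensity: cubed normalized dot products, summed.
    total = 0.0
    for trace in memory:
        n_rel = np.count_nonzero((probe != 0) | (trace != 0))
        s = probe @ trace / n_rel if n_rel else 0.0
        total += s ** 3
    return total

for quality, label in [(0.8, "healthy encoding"), (0.3, "degraded encoding")]:
    mem = encode(study, quality)
    old = np.mean([intensity(p, mem) for p in study])
    new = np.mean([intensity(rng.choice([-1, 1], n_feat), mem)
                   for _ in range(n_items)])
    print(f"{label}: old={old:.3f}, new={new:.3f}, margin={old - new:.3f}")
```

    The same degraded-encoding manipulation can then be read out through different task rules (classification vs. recognition) to reproduce the dissociations discussed above.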

  6. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
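
    FIT here means "failure in time": one failure per 10^9 device-hours. As a rough illustration of how such a figure is assembled (all numbers below are hypothetical placeholders, not the paper's simulated values), a raw per-bit upset rate scales with memory size and is then derated by the fraction of upsets that actually make the computer fail:

```python
# Hypothetical numbers for illustration only; the paper derives its values
# from device-level SEU simulation of the STT-MRAM cell.
bits = 2**30                      # 1 Gbit MRAM working memory
seu_per_bit_hour = 1e-15          # assumed raw per-bit upset rate
derating = 1e-2                   # assumed fraction of upsets causing a failure

raw_fit = bits * seu_per_bit_hour * 1e9   # upsets per 1e9 device-hours
print(f"raw: {raw_fit:.3g} FIT, effective: {raw_fit * derating:.3g} FIT")
```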

  7. Paging memory from random access memory to backing storage in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  8. Programs for Testing Processor-in-Memory Computing Systems

    Science.gov (United States)

    Katz, Daniel S.

    2006-01-01

    The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]

  9. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)

  10. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
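
    A minimal sketch of the sorting idea (in Python/NumPy for clarity, with assumed sizes; the original work targeted the vector machines of the era): permuting particles into grid-cell order makes the charge-accumulation pass touch the grid, and the particle arrays, sequentially rather than at random, which is exactly what keeps page faults to slow memory rare.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_cells = 1_000_000, 1024
x = rng.random(n_particles)                    # particle positions in [0, 1)

def accumulate_charge(x, n_cells):
    # Scatter each particle's charge to its cell; the grid's memory access
    # pattern follows the order in which particles are stored.
    cells = (x * n_cells).astype(np.int64)
    rho = np.zeros(n_cells)
    np.add.at(rho, cells, 1.0)
    return rho

# A nominal, occasional sort: particles in the same cell become contiguous
# in memory, so pages of slow memory are accessed sequentially.
order = np.argsort((x * n_cells).astype(np.int64), kind="stable")
rho = accumulate_charge(x[order], n_cells)
```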

  11. Projection multiplex recording of computer-synthesised one-dimensional Fourier holograms for holographic memory systems: mathematical and experimental modelling

    Energy Technology Data Exchange (ETDEWEB)

    Betin, A Yu; Bobrinev, V I; Verenikina, N M; Donchenko, S S; Odinokov, S B [Research Institute 'Radiotronics and Laser Engineering', Bauman Moscow State Technical University, Moscow (Russian Federation)]; Evtikhiev, N N; Zlokazov, E Yu; Starikov, S N; Starikov, R S [National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow (Russian Federation)]

    2015-08-31

    A multiplex method of recording computer-synthesised one-dimensional Fourier holograms intended for holographic memory devices is proposed. The method potentially allows increasing the recording density in the previously proposed holographic memory system based on the computer synthesis and projection recording of data page holograms. (holographic memory)

  12. Self-Testing Computer Memory

    Science.gov (United States)

    Chau, Savio N.; Rennels, David A.

    1988-01-01

    Memory system for computer repeatedly tests itself during brief, regular interruptions of normal processing of data. Detects and corrects transient faults such as single-event upsets (changes in bits due to ionizing radiation) within milliseconds of their occurring. The self-testing concept surpasses conventional approaches by actively flushing latent defects out of memory and attempting to correct them before they accumulate beyond the capacity for self-correction or detection. The cost of the improvement is a modest increase in the complexity of circuitry and operating time.
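
    The brief does not specify the error-correcting scheme, so the sketch below uses triple redundancy with majority voting purely to illustrate the scrubbing idea: periodically read, vote, and rewrite every word so that latent single-copy upsets are flushed before they accumulate beyond what the code can correct.

```python
class ScrubbedMemory:
    """Toy model: three copies per word, periodic majority-vote scrub."""

    def __init__(self, n_words):
        self.copies = [[0] * n_words for _ in range(3)]

    def write(self, addr, value):
        for copy in self.copies:
            copy[addr] = value

    def scrub(self):
        corrected = 0
        for addr in range(len(self.copies[0])):
            votes = [copy[addr] for copy in self.copies]
            majority = max(set(votes), key=votes.count)
            for copy in self.copies:
                if copy[addr] != majority:
                    copy[addr] = majority      # flush the latent fault
                    corrected += 1
        return corrected

mem = ScrubbedMemory(1024)
mem.write(7, 0xDEADBEEF)
mem.copies[1][7] ^= 1 << 3                     # simulate a single-event upset
print(mem.scrub())                             # -> 1 corrupted copy corrected
```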

  13. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

    This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime.  The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de

  14. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

    The main goal of this project is to investigate, develop, and implement algorithms for numerical linear algebra on parallel computers in order to acquire expertise in methods for parallel computations. An important motivation for analyzing and investigating the potential for parallelism in these algorithms is that many scientific applications rely heavily on the performance of the involved dense linear algebra building blocks. Even though we consider the distributed-memory as well as the shared-memory programming paradigm, the major part of the thesis is dedicated to distributed-memory architectures. We emphasize distributed-memory massively parallel computers, such as the Connection Machines model CM-200 and model CM-5/CM-5E, available to us at UNI-C and at Thinking Machines Corporation. The CM-200 was, at the time this project started, one of the few existing massively parallel computers...

  15. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Science.gov (United States)

    Kovac, André David; Koall, Maximilian; Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances to predator/prey population interactions. The evidence is mounting, not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space, as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation for any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single-node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single-node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.
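
    A compact numerical sketch of the architecture (assuming a tanh nonlinearity and illustrative parameters; the paper's extension additionally feeds the trained readout back into the input to obtain nonfading memory):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 300                  # virtual nodes along the delay loop, time steps
w_in = rng.uniform(-1, 1, N)    # input mask time-multiplexing each stimulus

u = rng.uniform(-0.5, 0.5, T)   # input stream
loop = np.zeros(N)              # activations stored along the delay line
states = np.zeros((T, N))
for t in range(T):
    for i in range(N):
        # one nonlinear element, driven by its own delayed output + masked input
        loop[i] = np.tanh(0.8 * loop[i] + w_in[i] * u[t])
    states[t] = loop

# Linear readout trained by least squares, e.g. to recall the input 3 steps back.
target = np.roll(u, 3)
w_out = np.linalg.lstsq(states, target, rcond=None)[0]
print(np.corrcoef(states @ w_out, target)[0, 1])
```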

  16. Associative Memory Computing Power and Its Simulation

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

    The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130,000 pre-calculated patterns and large numbers of chips can be easily assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed in commercial computers based on normal CPUs. The algorithm performance is limited, due to the lack of parallelism, and in addition the memory requirement is very large. In fact the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...
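
    The matching rule itself is simple to state in code. The sketch below is a sequential stand-in for what the AM ASIC does in parallel across all stored patterns at once; each pattern carries a care-mask so that coarser patterns with don't-care bits can also fire:

```python
# Each stored pattern is (value, care_mask); a hit requires all cared-about
# bits to match. In the AM chip every pattern is compared simultaneously, so
# retrieval time is independent of the database size; this loop only
# illustrates the matching rule.
def cam_lookup(patterns, word):
    return [i for i, (value, mask) in enumerate(patterns)
            if (word & mask) == (value & mask)]

patterns = [(0b1011_0100, 0b1111_1111),   # exact pattern
            (0b1011_0000, 0b1111_0000)]   # coarser pattern, low bits don't-care
print(cam_lookup(patterns, 0b1011_0100))  # -> [0, 1]
```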

  17. Associative Memory computing power and its simulation

    CERN Document Server

    Ancu, L S; The ATLAS collaboration; Britzger, D; Giannetti, P; Howarth, J W; Luongo, C; Pandini, C; Schmitt, S; Volpi, G

    2014-01-01

    The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130,000 pre-calculated patterns and large numbers of chips can be easily assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed in commercial computers based on normal CPUs. The algorithm performance is limited, due to the lack of parallelism, and in addition the memory requirement is very large. In fact the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...

  18. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Directory of Open Access Journals (Sweden)

    André David Kovac

    Full Text Available. Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances to predator/prey population interactions. The evidence is mounting, not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space, as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation for any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single-node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single-node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.

  19. Method of computer generation and projection recording of microholograms for holographic memory systems: mathematical modelling and experimental implementation

    International Nuclear Information System (INIS)

    Betin, A Yu; Bobrinev, V I; Evtikhiev, N N; Zherdev, A Yu; Zlokazov, E Yu; Lushnikov, D S; Markin, V V; Odinokov, S B; Starikov, S N; Starikov, R S

    2013-01-01

    A method of computer generation and projection recording of microholograms for holographic memory systems is presented; the results of mathematical modelling and experimental implementation of the method are demonstrated. (holographic memory)

  20. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  1. Dynamic computing random access memory

    International Nuclear Information System (INIS)

    Traversa, F L; Bonani, F; Pershin, Y V; Di Ventra, M

    2014-01-01

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200–2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. (paper)

  2. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    Science.gov (United States)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  3. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.; Zidan, Mohammed A.; Kurdahi, Fadi; Eltawil, Ahmed M.

    2016-01-01

    Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.

  4. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.

    2016-12-08

    Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.

  5. Ring interconnection for distributed memory automation and computing system

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, V I [Inst. for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation)

    1996-12-31

    Problems of development of measurement, acquisition and central systems based on a distributed memory and a ring interface are discussed. It has been found that the RAM LINK-type protocol can be used for ringlet links in non-symmetrical distributed memory architecture multiprocessor system interaction. 5 refs.

  6. System of common usage on the base of external memory devices and the SM-3 computer

    International Nuclear Information System (INIS)

    Baluka, G.; Vasin, A.Yu.; Ermakov, V.A.; Zhukov, G.P.; Zimin, G.N.; Namsraj, Yu.; Ostrovnoj, A.I.; Savvateev, A.S.; Salamatin, I.M.; Yanovskij, G.Ya.

    1980-01-01

    An easily modified system of common usage based on external memory devices and an SM-3 minicomputer, replacing some pulse analysers, is described. The system has the merits of pulse analysers and is more advantageous with regard to the effectiveness of equipment use, the possibility of changing configuration and functions, the protection of data against losses due to user errors and some failures, the price per registration channel, and the space occupied. The system of common usage is intended for the computing centre of the IBR-2 pulsed reactor. It is designed using the means of the SANPO system for the SM-3 computer.

  7. Computational modelling of memory retention from synapse to behaviour

    Science.gov (United States)

    van Rossum, Mark C. W.; Shippi, Maria

    2013-03-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.

  8. Computational modelling of memory retention from synapse to behaviour

    International Nuclear Information System (INIS)

    Van Rossum, Mark C W; Shippi, Maria

    2013-01-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists. (paper)

  9. Spin-transfer torque magnetoresistive random-access memory technologies for normally off computing (invited)

    International Nuclear Information System (INIS)

    Ando, K.; Yuasa, S.; Fujita, S.; Ito, J.; Yoda, H.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.

    2014-01-01

    Most parts of present computer systems are made of volatile devices, and the power to supply them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally off computers. Critical tasks to achieve normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed by using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and challenges that remain for normally off computers are discussed

  10. The computational nature of memory modification.

    Science.gov (United States)

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-03-15

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature.
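
    A toy version of the latent-cause logic (illustrative parameters, a Gaussian observation model, and a fixed base likelihood for a new cause; the published implementation is considerably richer): a Chinese-restaurant-process-style prior trades off assigning the current observation to an existing cluster, which modifies that memory, against opening a new cluster, which forms a new one.

```python
import numpy as np

alpha, sigma = 1.0, 0.5          # CRP concentration, observation noise (assumed)
traces, counts = [], []          # latent-cause means and observation counts

def observe(x):
    prior = np.array(counts + [alpha], dtype=float)          # CRP prior
    like = np.array([np.exp(-(x - m) ** 2 / (2 * sigma ** 2))
                     for m in traces] + [0.1])               # 0.1: new-cause base
    k = int(np.argmax(prior * like))
    if k == len(traces):         # new latent cause -> form a new memory
        traces.append(x); counts.append(1)
    else:                        # same latent cause -> modify the old memory
        counts[k] += 1
        traces[k] += (x - traces[k]) / counts[k]
    return k

for x in [0.0, 0.1, 3.0, 0.05, 3.1]:
    print(observe(x), [round(m, 2) for m in traces])
```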

  11. Advanced topics in security computer system design

    International Nuclear Information System (INIS)

    Stachniak, D.E.; Lamb, W.R.

    1989-01-01

    The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features

  12. A 32-bit computer for large memory applications on the FASTBUS

    International Nuclear Information System (INIS)

    Kellner, R.; Blossom, J.M.; Hung, J.P.

    1985-01-01

    A FASTBUS based 32-bit computer is being built at Los Alamos National Laboratory for use in systems requiring large fast memory in the FASTBUS environment. A separate local execution bus allows data reduction to proceed concurrently with other FASTBUS operations. The computer, which can operate in either master or slave mode, includes the National Semiconductor NS32032 chip set with demand paged memory management, floating point slave processor, interrupt control unit, timers, and time-of-day clock. The 16.0 megabytes of random access memory are interleaved to allow windowed direct memory access on and off the FASTBUS at 80 megabytes per second

  13. The computational nature of memory modification

    Science.gov (United States)

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-01-01

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature. DOI: http://dx.doi.org/10.7554/eLife.23763.001 PMID:28294944

  14. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment that shows interesting and valuable results.
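
    The abstract does not spell out the modified form, but the base stretched exponential it builds on is f(t) = exp(-(t/tau)^beta) with 0 < beta <= 1. Fitting it to measured access-time data takes a few lines (synthetic data below; SciPy assumed available):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, tau, beta):
    # Base stretched-exponential decay exp(-(t/tau)**beta); the paper
    # proposes a modified variant that the abstract does not specify.
    return np.exp(-(t / tau) ** beta)

t = np.linspace(0.1, 10, 50)
y = stretched_exp(t, 2.0, 0.6) + np.random.default_rng(0).normal(0, 0.01, t.size)
(tau, beta), _ = curve_fit(stretched_exp, t, y, p0=(1.0, 1.0))
print(f"tau={tau:.2f}, beta={beta:.2f}")
```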

  15. Parallel structures in human and computer memory

    Science.gov (United States)

    Kanerva, Pentti

    1986-08-01

    If we think of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library: We recognize a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. This paper is about how to construct a computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. The paper concludes that the frame problem of artificial intelligence could be solved by the use of such a memory if we were able to encode information about the world properly.
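
    Kanerva's proposal became known as sparse distributed memory. A minimal autoassociative sketch with illustrative sizes: every hard location within a Hamming radius of the cue is activated, bit counters are incremented or decremented on write, and summed and thresholded on read, which is why an approximate cue suffices to retrieve the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc, n_bits, radius = 2000, 256, 112     # illustrative sizes and access radius

hard_addr = rng.integers(0, 2, (n_loc, n_bits))
counters = np.zeros((n_loc, n_bits), dtype=int)

def activate(addr):
    # All hard locations within the Hamming-distance circle of the address.
    return np.sum(hard_addr != addr, axis=1) <= radius

def write(addr, data):
    counters[activate(addr)] += 2 * data - 1   # +1 for a 1-bit, -1 for a 0-bit

def read(addr):
    return (counters[activate(addr)].sum(axis=0) >= 0).astype(int)

word = rng.integers(0, 2, n_bits)
write(word, word)                              # autoassociative storage
noisy = word.copy()
noisy[rng.choice(n_bits, 20, replace=False)] ^= 1
print(np.mean(read(noisy) == word))            # recovers despite a fuzzy cue
```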

  16. System and method for programmable bank selection for banked memory subsystems

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton on Hudson, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hoenicke, Dirk (Seebruck-Seeon, DE); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY)

    2010-09-07

    A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment to access memory storage distributed across the one or more memory storage structures.
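
    In software terms, the patent's first-level logic amounts to a programmable mapping from selected physical-address bit positions to a bank number. The sketch below is an illustrative model of that mapping, not the patented circuit:

```python
# Programmable bank select: a configurable set of physical-address bit
# positions is extracted and concatenated to choose the memory bank.
def make_bank_selector(bit_positions):
    def select(phys_addr):
        bank = 0
        for i, pos in enumerate(bit_positions):
            bank |= ((phys_addr >> pos) & 1) << i
        return bank
    return select

# e.g. interleave 4 banks on address bits 7 and 14 instead of the low bits
select = make_bank_selector([7, 14])
print([select(a) for a in (0x0000, 0x0080, 0x4000, 0x4080)])  # -> [0, 1, 2, 3]
```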

  17. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...

  18. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  19. Holographic memory system based on projection recording of computer-generated 1D Fourier holograms.

    Science.gov (United States)

    Betin, A Yu; Bobrinev, V I; Donchenko, S S; Odinokov, S B; Evtikhiev, N N; Starikov, R S; Starikov, S N; Zlokazov, E Yu

    2014-10-01

    Computer generation of holographic structures significantly simplifies the optical scheme that is used to record the microholograms in a holographic memory recording system. Digital holographic synthesis also makes it possible to account for the nonlinear errors of the recording system and thus improve the quality of the microholograms. Multiplexed recording of holograms is a widespread technique for increasing the data recording density. In this article we present a holographic memory system based on digital synthesis of amplitude one-dimensional (1D) Fourier transform holograms and the multiplexed recording of these holograms onto the holographic carrier using an optical projection scheme. 1D Fourier transform holograms are very sensitive to the orientation of the anamorphic optical element (a cylindrical lens) that is required for reconstruction of the encoded data object. The multiplexed recording of several holograms with different orientations in an optical projection scheme allowed reconstruction of the data object from each hologram by rotating the cylindrical lens by the corresponding angle. We also discuss two optical schemes for reading out the recorded holograms: a full-page readout system and a line-by-line readout system. We consider the benefits of both systems and present the results of experimental modeling of nonmultiplexed and multiplexed recording and reconstruction of 1D Fourier holograms.

  20. Computing betweenness centrality in external memory

    DEFF Research Database (Denmark)

    Arge, Lars; Goodrich, Michael T.; Walderveen, Freek van

    2013-01-01

    Betweenness centrality is one of the most well-known measures of the importance of nodes in a social-network graph. In this paper we describe the first known external-memory and cache-oblivious algorithms for computing betweenness centrality. We present four different external-memory algorithms...

  1. Human Memory Organization for Computer Programs.

    Science.gov (United States)

    Norcio, A. F.; Kerst, Stephen M.

    1983-01-01

    Results of study investigating human memory organization in processing of computer programming languages indicate that algorithmic logic segments form a cognitive organizational structure in memory for programs. Statement indentation and internal program documentation did not enhance organizational process of recall of statements in five Fortran…

  2. The Memory System You Can't Avoid it, You Can't Ignore it, You Can't Fake it

    CERN Document Server

    Jacob, Bruce

    2009-01-01

    Today, computer-system optimization, at both the hardware and software levels, must consider the details of the memory system in its analysis; failing to do so yields systems that are increasingly inefficient as those systems become more complex. This lecture seeks to introduce the reader to the most important details of the memory system; it targets both computer scientists and computer engineers in industry and in academia. Roughly speaking, computer scientists are the users of the memory system and computer engineers are the designers of the memory system. Both can benefit tremendously from

  3. Injecting Artificial Memory Errors Into a Running Computer Program

    Science.gov (United States)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
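
    The fault-rate rule described above is easy to model. The toy sketch below draws the number of upsets for a buffer from a Poisson process at a given SEUs-per-byte-per-second rate; the real tool patches a live process through Valgrind rather than mutating a Python buffer:

```python
import numpy as np

def inject_seus(buf: bytearray, rate_seu_per_byte_s: float,
                exposure_s: float, rng) -> int:
    # Expected upset count = rate * size * time; actual count drawn Poisson.
    n = rng.poisson(rate_seu_per_byte_s * len(buf) * exposure_s)
    for _ in range(n):
        i = rng.integers(len(buf))
        buf[i] ^= 1 << rng.integers(8)     # flip one random bit of one byte
    return n

rng = np.random.default_rng(0)
data = bytearray(4096)
n = inject_seus(data, rate_seu_per_byte_s=1e-4, exposure_s=60.0, rng=rng)
print(n, "SEUs injected")                  # expected ~25 for these values
```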

  4. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    Science.gov (United States)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  5. The MUSOS (MUsic SOftware System) Toolkit: A computer-based, open source application for testing memory for melodies.

    Science.gov (United States)

    Rainsford, M; Palmer, M A; Paine, G

    2018-04-01

    Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.

  6. Distributed Memory Parallel Computing with SEAWAT

    Science.gov (United States)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (about 10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources.
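
    Of the ingredients listed, the RCB partitioner is the simplest to sketch. The toy version below (not the PKS code) recursively splits cells along the widest coordinate at the median, yielding the balanced subdomains that each core then keeps in local memory:

```python
import numpy as np

def rcb(points, ids, n_parts):
    # Recursive Coordinate Bisection: split along the widest axis at the
    # median until the requested number of balanced subdomains is reached.
    if n_parts == 1:
        return [ids]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    order = np.argsort(points[:, axis])
    half = len(ids) // 2
    left, right = order[:half], order[half:]
    return (rcb(points[left], ids[left], n_parts // 2) +
            rcb(points[right], ids[right], n_parts - n_parts // 2))

rng = np.random.default_rng(0)
cells = rng.random((10_000, 3))                # cell centroids of a model grid
parts = rcb(cells, np.arange(len(cells)), 8)
print([len(p) for p in parts])                 # balanced subdomain sizes
```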

  7. Optical computing, optical memory, and SBIRs at Foster-Miller

    Science.gov (United States)

    Domash, Lawrence H.

    1994-03-01

    A desktop design and manufacturing system for binary diffractive elements, MacBEEP, was developed with the optical researcher in mind. Optical processing systems for specialized tasks such as cellular automata computation and fractal measurement were constructed. A new family of switchable holograms has enabled several applications for control of laser beams in optical memories. New spatial light modulators and optical logic elements have been demonstrated based on a more manufacturable semiconductor technology. Novel synthetic and polymeric nonlinear materials for optical storage are under development in an integrated memory architecture. SBIR programs enable creative contributions from smaller companies, both product-oriented and technology-oriented, and support advances that might not otherwise be developed.

  8. Limbic systems for emotion and for memory, but no single limbic system.

    Science.gov (United States)

    Rolls, Edmund T

    2015-01-01

    The concept of a (single) limbic system is shown to be outmoded. Instead, anatomical, neurophysiological, functional neuroimaging, and neuropsychological evidence is described that anterior limbic and related structures including the orbitofrontal cortex and amygdala are involved in emotion, reward valuation, and reward-related decision-making (but not memory), with the value representations transmitted to the anterior cingulate cortex for action-outcome learning. In this 'emotion limbic system' a computational principle is that feedforward pattern association networks learn associations from visual, olfactory and auditory stimuli, to primary reinforcers such as taste, touch, and pain. In primates including humans this learning can be very rapid and rule-based, with the orbitofrontal cortex overshadowing the amygdala in this learning important for social and emotional behaviour. Complementary evidence is described showing that the hippocampus and limbic structures to which it is connected including the posterior cingulate cortex and the fornix-mammillary body-anterior thalamus-posterior cingulate circuit are involved in episodic or event memory, but not emotion. This 'hippocampal system' receives information from neocortical areas about spatial location, and objects, and can rapidly associate this information together by the different computational principle of autoassociation in the CA3 region of the hippocampus involving feedback. The system can later recall the whole of this information in the CA3 region from any component, a feedback process, and can recall the information back to neocortical areas, again a feedback (to neocortex) recall process. Emotion can enter this memory system from the orbitofrontal cortex etc., and be recalled back to the orbitofrontal cortex etc. during memory recall, but the emotional and hippocampal networks or 'limbic systems' operate by different computational principles, and operate independently of each other except insofar as an

  9. Static Memory Deduplication for Performance Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gangyong Jia

    2017-04-01

    Full Text Available. In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.

  10. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.
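
    The offline page-detection pass reduces, in essence, to hashing fixed-size pages of the read-only code segment and mapping each duplicate to one canonical page. A simplified sketch of that idea (a real implementation would remap page-table entries rather than return a dictionary):

```python
import hashlib

PAGE = 4096

def dedup_map(code_segment: bytes):
    # Offline pass: hash each page of the (read-only) code segment and map
    # duplicate pages to a single canonical copy.
    canonical, mapping = {}, {}
    for i in range(0, len(code_segment), PAGE):
        digest = hashlib.sha1(code_segment[i:i + PAGE]).hexdigest()
        mapping[i // PAGE] = canonical.setdefault(digest, i // PAGE)
    return mapping

seg = bytes(PAGE) * 3 + b"\x90" * PAGE        # three identical pages + one distinct
m = dedup_map(seg)
print(m, "pages saved:", len(m) - len(set(m.values())))   # -> 2 pages saved
```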

  11. Retrieval and organizational strategies in conceptual memory a computer model

    CERN Document Server

    Kolodner, Janet L

    2014-01-01

    'Someday we expect that computers will be able to keep us informed about the news. People have imagined being able to ask their home computers questions such as "What's going on in the world?"…'. Originally published in 1984, this book is a fascinating look at the world of memory and computers before the internet became the mainstream phenomenon it is today. It looks at the early development of a computer system that could keep us informed in a way that we now take for granted. Presenting a theory of remembering, based on human information processing, it begins to address many of the hard problems implicated in the quest to make computers remember. The book had two purposes in presenting this theory of remembering. First, to be used in implementing intelligent computer systems, including fact retrieval systems and intelligent systems in general. Any intelligent program needs to store and use a great deal of knowledge. The strategies and structures in the book were designed to be used for that purpos...

  12. Memory controllers for real-time embedded systems: predictable and composable real-time systems

    CERN Document Server

    Akesson, Benny

    2012-01-01

    Verification of real-time requirements in systems-on-chip becomes more complex as more applications are integrated. Predictable and composable systems can manage the increasing complexity using formal verification and simulation. This book explains the concepts of predictability and composability and shows how to apply them to the design and analysis of a memory controller, which is a key component in any real-time system. This book is generally intended for readers interested in Systems-on-Chips with real-time applications. It is especially well-suited for readers looking to use SDRAM memories in systems with hard or firm real-time requirements. There is a strong focus on real-time concepts, such as predictability and composability, as well as a brief discussion about memory controller architectures for high-performance computing. Readers will learn step-by-step how to go from an unpredictable SDRAM memory, offering highly variable bandwidth and latency, to a predictable and composable shared memory...

  13. EPS Mid-Career Award 2011. Are there multiple memory systems? Tests of models of implicit and explicit memory.

    Science.gov (United States)

    Shanks, David R; Berry, Christopher J

    2012-01-01

    This article reviews recent work aimed at developing a new framework, based on signal detection theory, for understanding the relationship between explicit (e.g., recognition) and implicit (e.g., priming) memory. Within this framework, different assumptions about sources of memorial evidence can be framed. Application to experimental results provides robust evidence for a single-system model in preference to multiple-systems models. This evidence comes from several sources including studies of the effects of amnesia and ageing on explicit and implicit memory. The framework allows a range of concepts in current memory research, such as familiarity, recollection, fluency, and source memory, to be linked to implicit memory. More generally, this work emphasizes the value of modern computational modelling techniques in the study of learning and memory.

  14. Neuromorphic cognitive systems: a learning and memory centered approach

    CERN Document Server

    Yu, Qiang; Hu, Jun; Tan Chen, Kay

    2017-01-01

    This book presents neuromorphic cognitive systems from a learning and memory-centered perspective. It illustrates how to build a system network of neurons to perform spike-based information processing, computing, and high-level cognitive tasks. It is beneficial to a wide spectrum of readers, including undergraduate and postgraduate students and researchers who are interested in neuromorphic computing and neuromorphic engineering, as well as engineers and professionals in industry who are involved in the design and applications of neuromorphic cognitive systems, neuromorphic sensors and processors, and cognitive robotics. The book formulates a systematic framework, from the basic mathematical and computational methods in spike-based neural encoding, learning in both single and multi-layered networks, to a near cognitive level composed of memory and cognition. Since the mechanisms by which spiking neurons integrate to formulate cognitive functions in the brain are little understood, studies of neuromo...

  15. Perspective: Memcomputing: Leveraging memory and physics to compute efficiently

    Science.gov (United States)

    Di Ventra, Massimiliano; Traversa, Fabio L.

    2018-05-01

    It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs

  16. Virtual memory support for distributed computing environments using a shared data object model

    Science.gov (United States)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together constitute a novel approach to supporting flexible coherence under application control.
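
    As a side note, the unified view that memory-mapping provides can be sketched with Python's standard mmap module. This toy example only shows a file accessed through a memory-like interface; it does not model the paper's typed memory objects, coherence protocols, or distribution.

    ```python
    import mmap
    import os

    # Create a small backing file, then map it so that "secondary storage"
    # is accessed through an ordinary memory interface.
    path = "shared_object.bin"
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)

    with open(path, "r+b") as f:
        mem = mmap.mmap(f.fileno(), 4096)  # unified view: the file as memory
        mem[0:5] = b"hello"                # write as if to a byte array
        mem.flush()                        # propagate changes to the backing store
        mem.close()

    with open(path, "rb") as f:
        print(f.read(5))                   # b'hello'
    os.remove(path)
    ```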

  17. Data systems and computer science space data systems: Onboard memory and storage

    Science.gov (United States)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  18. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from an information-processing view. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory; we then develop an intra-modular network with the modeling algorithm by mapping nodes and edges, after which the bi-modular network is delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information-processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with known memory phenomena from an information-processing view.

  19. Innovation of the computer system for the WWER-440 simulator

    International Nuclear Information System (INIS)

    Schrumpf, L.

    1988-01-01

    The configuration of the WWER-440 simulator computer system consists of four SMEP computers. The basic data processing unit consists of two interlinked SM 52/11.M1 computers with 1 MB of main memory. This part of the computer system of the simulator controls the operation of the entire simulator, processes the programs of technology behavior simulation, of the unit information system and of other special systems, guarantees program support and the operation of the instructor's console. An SM 52/11 computer with 256 kB of main memory is connected to each unit. It is used as a communication unit for data transmission using the DASIO 600 interface. Semigraphic color displays are based on the microprocessor modules of the SM 50/40 and SM 53/10 kit supplemented with a modified TESLA COLOR 110 ST tv receiver. (J.B.). 1 fig

  20. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing for data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  1. A learnable parallel processing architecture towards unity of memory and computing

    Science.gov (United States)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing for data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  2. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from an information-processing view. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory; we then develop an intra-modular network with the modeling algorithm by mapping nodes and edges, after which the bi-modular network is delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information-processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with known memory phenomena from an information-processing view. PMID:27876847

  3. Contrasting single and multi-component working-memory systems in dual tasking.

    Science.gov (United States)

    Nijboer, Menno; Borst, Jelmer; van Rijn, Hedderik; Taatgen, Niels

    2016-05-01

    Working memory can be a major source of interference in dual tasking. However, there is no consensus on whether this interference is the result of a single working memory bottleneck, or of interactions between different working memory components that together form a complete working-memory system. We report a behavioral and an fMRI dataset in which working memory requirements are manipulated during multitasking. We show that a computational cognitive model that assumes a distributed version of working memory accounts for both behavioral and neuroimaging data better than a model that takes a more centralized approach. The model's working memory consists of an attentional focus, declarative memory, and a subvocalized rehearsal mechanism. Thus, the data and model favor an account where working memory interference in dual tasking is the result of interactions between different resources that together form a working-memory system. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Memory intensive functional architecture for distributed computer control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which have now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures and one system implementation which is a system for performing control and data acquisition of a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  5. Visual software system for memory interleaving simulation

    Directory of Open Access Journals (Sweden)

    Milenković Katarina

    2017-01-01

    Full Text Available This paper describes the visual software system for memory interleaving simulation (VSMIS), implemented for the purpose of the course Computer Architecture and Organization 1 at the School of Electrical Engineering, University of Belgrade. The simulator enables students to expand their knowledge through practical work in the laboratory, as well as through independent work at home. VSMIS gives users the possibility to initialize parts of the system and to control simulation steps. The user has the ability to monitor the simulation through a graphical representation. It is possible to navigate through the entire hierarchy of the system using simple navigation. During the simulation the user can observe and set the values of memory locations. At any time, the user can reset the simulation of the system and observe it for different memory states; in addition, it is possible to save the current state of the simulation and continue with the execution of the simulation later. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. III44009]

  6. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  7. Exploiting short-term memory in soft body dynamics as a computational resource.

    Science.gov (United States)

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
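
    A rough numerical analogue of this finding, assuming a random recurrent network as a stand-in for the soft body: the "body" is left untrained and only a linear readout is fitted to recall a past input, so any recall ability must come from short-term memory in the body dynamics. The network size, spectral scaling, and delay below are arbitrary illustrative choices, not the authors' setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, delay = 100, 2000, 5

    # A fixed random recurrent tanh network stands in for the passive
    # dynamics of the soft silicone arm; it is never trained.
    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # echo-state-style scaling
    w_in = rng.normal(0, 1, N)

    u = rng.uniform(-1, 1, T)                   # random input stream
    x = np.zeros(N)
    states = np.zeros((T, N))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        states[t] = x

    # Train only a linear readout to recall the input `delay` steps back.
    X, y = states[delay:], u[:-delay]
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    print("recall correlation:", np.corrcoef(X @ w_out, y)[0, 1])
    ```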

  8. Computer Icons and the Art of Memory.

    Science.gov (United States)

    McNair, John R.

    1996-01-01

    States that key aspects of "memoria," the ancient Art of Memory, especially its focus on vivid representational images set against distinct backgrounds, can be helpful in creating memorable, universal, and easily retrievable computer icons. (PA)

  9. The Spacetime Memory of Geometric Phases and Quantum Computing

    CERN Document Server

    Binder, B

    2002-01-01

    Spacetime memory is defined with a holonomic approach to information processing, where multi-state stability is introduced by a non-linear phase-locked loop. Geometric phases serve as the carrier of physical information and geometric memory (of orientation) given by a path integral measure of curvature that is periodically refreshed. Regarding the resulting spin-orbit coupling and gauge field, the geometric nature of spacetime memory suggests to assign intrinsic computational properties to the electromagnetic field.

  10. Metal oxide resistive random access memory based synaptic devices for brain-inspired computing

    Science.gov (United States)

    Gao, Bin; Kang, Jinfeng; Zhou, Zheng; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan

    2016-04-01

    The traditional Boolean computing paradigm based on the von Neumann architecture is facing great challenges for future information technology applications such as big data, the Internet of Things (IoT), and wearable devices, due to limits on processing capability such as binary data storage and computing, non-parallel data processing, and the bus requirement between memory units and logic units. The brain-inspired neuromorphic computing paradigm is believed to be one of the promising solutions for realizing more complex functions with a lower cost. To perform such brain-inspired computing with a low cost and low power consumption, novel devices for use as electronic synapses are needed. Metal oxide resistive random access memory (ReRAM) devices have emerged as the leading candidate for electronic synapses. This paper comprehensively addresses the recent work on the design and optimization of metal oxide ReRAM-based synaptic devices. A performance enhancement methodology and optimized operation scheme to achieve analog resistive switching and low-energy training behavior are provided. A three-dimensional vertical synapse network architecture is proposed for high-density integration and low-cost fabrication. The impacts of the ReRAM synaptic device features on the performances of neuromorphic systems are also discussed on the basis of a constructed neuromorphic visual system with a pattern recognition function. Possible solutions to achieve high recognition accuracy and efficiency in neuromorphic systems are presented.
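
    The key computational primitive such ReRAM synaptic arrays offer is an analog vector-matrix multiply: with synaptic weights stored as conductances, Ohm's law and Kirchhoff's current law sum the column currents in a single step. A small numpy sketch of this mapping, with arbitrary illustrative values:

    ```python
    import numpy as np

    # A crossbar of memristive synapses computes y = G^T v in one step:
    # each column current is the sum of (conductance x row voltage).
    G = np.array([[1.0, 0.2],      # conductances (one synapse per cross-point)
                  [0.5, 0.8],
                  [0.1, 0.9]])
    v = np.array([0.3, 0.7, 0.2])  # read voltages applied to the rows

    i_col = G.T @ v                # column currents = analog dot products
    print(i_col)                   # [0.67, 0.80]
    ```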

  11. Computer-aided protective system (CAPS)

    International Nuclear Information System (INIS)

    Squire, R.K.

    1988-01-01

    A method of improving the security of materials in transit is described. The system provides a continuously monitored position location system for the transport vehicle, an internal computer-based geographic delimiter that makes continuous comparisons of actual positions with the preplanned routing and schedule, and a tamper detection/reaction system. The position comparison is utilized to institute preprogrammed reactive measures if the carrier is taken off course or schedule, penetrated, or otherwise interfered with. The geographic locator could be an independent internal platform or an external signal-dependent system utilizing GPS, Loran or a similar source of geographic information; a small (micro) computer could provide adequate memory and computational capacity; ensuring the integrity of the system indicates the need for a tamper-proof container and built-in intrusion sensors. A variant of the system could provide real-time transmission of the vehicle position and condition to a central control point; such transmission could be encrypted to preclude spoofing.

  12. Present SLAC accelerator computer control system features

    International Nuclear Information System (INIS)

    Davidson, V.; Johnson, R.

    1981-02-01

    The current functional organization and state of software development of the computer control system of the Stanford Linear Accelerator is described. Included is a discussion of the distribution of functions throughout the system, the local controller features, and currently implemented features of the touch panel portion of the system. The functional use of our triplex of PDP11-34 computers sharing common memory is described. Also included is a description of the use of pseudopanel tables as data tables for closed loop control functions.

  13. Method and apparatus for managing access to a memory

    Science.gov (United States)

    DeBenedictis, Erik

    2017-08-01

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.

  14. Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1

    International Nuclear Information System (INIS)

    Kawasaki, Shoji; Nakamura, Kazuo; Nakamura, Yukio; Hiraki, Naoharu; Toi, Kazuo

    1981-01-01

    A data processing system was designed and constructed for the purpose of analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 K byte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to keep the programs of the computer. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer as an I/O device can be connected. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown by one of the I/O devices. The results of the test run showed good performance. (Kato, T.)

  15. Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1

    Energy Technology Data Exchange (ETDEWEB)

    Kawasaki, S; Nakamura, K; Nakamura, Y; Hiraki, N; Toi, K [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics

    1981-02-01

    A data processing system was designed and constructed for the purpose of analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 K byte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to keep the programs of the computer. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer as an I/O device can be connected. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown by one of the I/O devices. The results of the test run showed good performance.

  16. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  17. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
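
    One of the DNC's core mechanisms, content-based addressing, can be sketched compactly: memory rows are weighted by the cosine similarity between a read key and each row, sharpened by a strength parameter and normalized with a softmax. The numpy sketch below follows the published formulation in spirit; the function name, memory contents, and parameter values are illustrative.

    ```python
    import numpy as np

    def content_weights(memory: np.ndarray, key: np.ndarray, beta: float):
        """Differentiable content-based addressing: cosine similarity of a
        key against every memory row, sharpened by beta, softmax-normalized."""
        mem_n = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
        key_n = key / (np.linalg.norm(key) + 1e-8)
        scores = beta * (mem_n @ key_n)
        w = np.exp(scores - scores.max())
        return w / w.sum()

    M = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.9, 0.1, 0.0]])
    w = content_weights(M, key=np.array([1.0, 0.0, 0.0]), beta=10.0)
    read = w @ M          # soft read: weighted sum of memory rows
    print(w.round(3), read.round(3))
    ```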

  18. An Alternative Algorithm for Computing Watersheds on Shared Memory Parallel Computers

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.

    1995-01-01

    In this paper a parallel implementation of a watershed algorithm is proposed. The algorithm can easily be implemented on shared memory parallel computers. The watershed transform is generally considered to be inherently sequential since the discrete watershed of an image is defined using recursion.

  19. How Human Memory and Working Memory Work in Second Language Acquisition

    OpenAIRE

    小那覇, 洋子; Onaha, Hiroko

    2014-01-01

    We often draw an analogy between human memory and computers. Information around us is taken into our memory storage first, and then we use the information in storage whenever we need it in our daily life. Linguistic information is also in storage and we process our thoughts based on the memory that is stored. Memory storage consists of multiple memory systems, one of which is called working memory, which includes short-term memory. Working memory is the central system that underpins the process...

  20. Main Memory DBMS

    NARCIS (Netherlands)

    P.A. Boncz (Peter); L. Liu (Lei); M. Tamer Özsu

    2008-01-01

    A main memory database system is a DBMS that primarily relies on main memory for computer data storage. In contrast, normal database management systems employ hard-disk-based persistent storage.

  1. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  2. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    Energy Technology Data Exchange (ETDEWEB)

    Vineyard, Craig Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show that a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  3. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

    The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger datasets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. https://github.com/solonas13/maw (free software under the terms of the GNU GPL). alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
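
    For intuition: a word w is a minimal absent word of s if w does not occur in s while both w[:-1] and w[1:] do. A naive in-memory Python sketch of this definition follows; emMAW's contribution is performing the computation in external memory at genome scale, whereas this toy version holds every factor of s in RAM and only suits small strings.

    ```python
    def minimal_absent_words(s: str, alphabet: str, max_len: int):
        """Naive enumeration of minimal absent words: w is a MAW iff w
        does not occur in s while both w[:-1] and w[1:] do."""
        factors = {s[i:j] for i in range(len(s))
                   for j in range(i + 1, min(i + max_len, len(s)) + 1)}
        factors.add("")  # empty word, so single-letter MAWs are found too
        maws = []
        for u in factors:                  # u plays the role of w[:-1]
            for a in alphabet:
                w = u + a
                if 0 < len(w) <= max_len and w not in factors and w[1:] in factors:
                    maws.append(w)
        return sorted(maws)

    print(minimal_absent_words("abaab", "ab", 4))  # ['aaa', 'aaba', 'bab', 'bb']
    ```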

  4. Exploring memory hierarchy design with emerging memory technologies

    CERN Document Server

    Sun, Guangyu

    2014-01-01

    This book equips readers with tools for computer architecture of high performance, low power, and high reliability memory hierarchy in computer systems based on emerging memory technologies, such as STTRAM, PCM, FBDRAM, etc. The techniques described offer advantages of high density, near-zero static power, and immunity to soft errors, which have the potential of overcoming the “memory wall.” The authors discuss memory design from various perspectives: emerging memory technologies are employed in the memory hierarchy with novel architecture modification; hybrid memory structure is introduced to leverage advantages from multiple memory technologies; an analytical model named “Moguls” is introduced to explore quantitatively the optimization design of a memory hierarchy; finally, the vulnerability of the CMPs to radiation-based soft errors is improved by replacing different levels of on-chip memory with STT-RAMs. · Provides a holistic study of using emerging memory technologies i...

  5. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    Science.gov (United States)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input workload of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying precedence execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.

  6. Organization of the two-level memory in the image processing system on scanning measuring projectors

    International Nuclear Information System (INIS)

    Sychev, A.Yu.

    1977-01-01

    Discussed are the problems of improving the efficiency of the system for processing pictures taken in bubble chambers with the use of scanning measuring projectors. The system comprises 20 to 30 projectors linked with the ICL-1903A computer provided with a main memory, 64 kilobytes in size. Because of the insufficient size of the main memory, a part of the programs and data is located in a second-level memory, i.e. in an external memory. The analytical model described herein is used to analyze the effect of the memory organization on the characteristics of the system. It is shown that organization of pure procedures and introduction of the centralized control of the two-level memory result in substantial improvement of the efficiency of the picture processing system.

  7. A Layered Active Memory Architecture for Cognitive Vision Systems

    OpenAIRE

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  8. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics, there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. One must first design a computational structure which is well suited for a wide range of vision tasks and then develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  9. The associative memory system for the FTK processor at ATLAS

    CERN Document Server

    Magalotti, D; The ATLAS collaboration; Donati, S; Luciano, P; Piendibene, M; Giannetti, P; Lanza, A; Verzellesi, G; Sakellariou, Andreas; Billereau, W; Combe, J M

    2014-01-01

    In high energy physics experiments, the most interesting processes are very rare and hidden in an extremely large level of background. As the experiment complexity, accelerator backgrounds, and instantaneous luminosity increase, more effective and accurate data selection techniques are needed. The Fast TracKer processor (FTK) is a real time tracking processor designed for the ATLAS trigger upgrade. The FTK core is the Associative Memory system. It provides massive computing power to minimize the processing time of complex tracking algorithms executed online. This paper reports on the results and performance of a new prototype of Associative Memory system.

  10. Reprogrammable logic in memristive crossbar for in-memory computing

    Science.gov (United States)

    Cheng, Long; Zhang, Mei-Yun; Li, Yi; Zhou, Ya-Xiong; Wang, Zhuo-Rui; Hu, Si-Yu; Long, Shi-Bing; Liu, Ming; Miao, Xiang-Shui

    2017-12-01

    Memristive stateful logic has emerged as a promising next-generation in-memory computing paradigm to address escalating computing-performance pressures in the traditional von Neumann architecture. Here, we present a nonvolatile reprogrammable logic method that can process data between different rows and columns in a memristive crossbar array based on material implication (IMP) logic. Arbitrary Boolean logic can be executed with a reprogrammable cell containing four memristors in a crossbar array. In the fabricated Ti/HfO2/W memristive array, some fundamental functions, such as universal NAND logic and data transfer, were experimentally implemented. Moreover, using eight memristors in a 2 × 4 array, a one-bit full adder was theoretically designed and verified by simulation to demonstrate the feasibility of our method for accomplishing complex computing tasks. In addition, some critical logic-related performance aspects were further discussed, such as the flexibility of data processing, the cascading problem, and the bit error rate. Such a method could be a step forward in developing IMP-based memristive nonvolatile logic for large-scale in-memory computing architecture.
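
    For reference, material implication computes IMP(p, q) = (NOT p) OR q, and together with a FALSE (reset) operation it is functionally complete; for example, NAND(p, q) = p IMP (q IMP 0). A tiny Python truth-table check of that construction follows; the circuit-level voltage sequencing on the memristors is not modeled.

    ```python
    def imp(p: int, q: int) -> int:
        """Material implication, the native stateful operation of a
        memristor pair; in hardware the result overwrites device q."""
        return (1 - p) | q

    def nand(p: int, q: int) -> int:
        # NAND(p, q) = p IMP (q IMP 0): two IMP steps plus one FALSE write,
        # which is why IMP plus FALSE is functionally complete.
        return imp(p, imp(q, 0))

    for p in (0, 1):
        for q in (0, 1):
            print(p, q, "->", nand(p, q))  # 1, 1, 1, 0
    ```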

  11. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan

    2016-01-01

    In-memory caching of intermediate data and eager combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in distributed data processing systems like Spark and Flink. However, it has also been widely reported that these techniques would create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which is shown 1) to reduce the garbage collection time by up to 99.9%, 2) to achieve up to 22.7x speedup in terms of execution time in cases without data spilling and 41.6x speedup in cases with data spilling, and 3) to consume up to 46.6% less memory.

  12. Applications for Packetized Memory Interfaces

    OpenAIRE

    Watson, Myles Glen

    2015-01-01

    The performance of the memory subsystem has a large impact on the performance of modern computer systems. Many important applications are memory bound and others are expected to become memory bound in the future. The importance of memory performance makes it imperative to understand and optimize the interactions between applications and the system architecture. Prototyping and exploring various configurations of memory systems can give important insights, but current memory interfaces are lim...

  13. Working Memory Systems in the Rat.

    Science.gov (United States)

    Bratch, Alexander; Kann, Spencer; Cain, Joshua A; Wu, Jie-En; Rivera-Reyes, Nilda; Dalecki, Stefan; Arman, Diana; Dunn, Austin; Cooper, Shiloh; Corbin, Hannah E; Doyle, Amanda R; Pizzo, Matthew J; Smith, Alexandra E; Crystal, Jonathon D

    2016-02-08

    A fundamental feature of memory in humans is the ability to simultaneously work with multiple types of information using independent memory systems. Working memory is conceptualized as two independent memory systems under executive control [1, 2]. Although there is a long history of using the term "working memory" to describe short-term memory in animals, it is not known whether multiple, independent memory systems exist in nonhumans. Here, we used two established short-term memory approaches to test the hypothesis that spatial and olfactory memory operate as independent working memory resources in the rat. In the olfactory memory task, rats chose a novel odor from a gradually incrementing set of old odors [3]. In the spatial memory task, rats searched for a depleting food source at multiple locations [4]. We presented rats with information to hold in memory in one domain (e.g., olfactory) while adding a memory load in the other domain (e.g., spatial). Control conditions equated the retention interval delay without adding a second memory load. In a further experiment, we used proactive interference [5-7] in the spatial domain to compromise spatial memory and evaluated the impact of adding an olfactory memory load. Olfactory and spatial memory are resistant to interference from the addition of a memory load in the other domain. Our data suggest that olfactory and spatial memory draw on independent working memory systems in the rat. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Providing for organizational memory in computer supported meetings

    OpenAIRE

    Schwabe, Gerhard

    1994-01-01

    Meeting memory features are poorly integrated into current group support systems (GSS). In this article, I discuss how to introduce meeting memory functionality into a GSS. The article first introduces the benefits of effective meetings and organizational memory to an organization. Then, the following challenges to design are discussed: How to store semantically rich output, how to build up the meeting memory with a minimum of additional effort, how to integrate meeting memory into organizati...

  15. Computational Approach to Profit Optimization of a Loss-Queueing System

    Directory of Open Access Journals (Sweden)

    Dinesh Kumar Yadav

    2010-01-01

    Full Text Available The objective of this paper is the profit optimization of a loss queueing system with finite capacity. Here, we define and compute the total expected cost (TEC) and the total expected revenue (TER), and consequently we compute the total optimal profit (TOP) of the system. In order to compute the total optimal profit of the system, a computing algorithm has been developed and a fast-converging Newton-Raphson (N-R) method has been employed, which requires the least computing time and less memory space compared to other methods. Sensitivity analysis and observations based on graphics have added significant value to this model.
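
    The record does not give the paper's cost model, but a generic loss-queueing profit computation of this flavor can be sketched for an M/M/c/c system using the Erlang-B recursion; the revenue and cost parameters and the exhaustive capacity scan below are hypothetical stand-ins for the paper's TEC/TER/TOP definitions and its Newton-Raphson search.

    ```python
    def erlang_b(c: int, a: float) -> float:
        """Blocking probability of an M/M/c/c loss system with offered
        load a = lambda/mu, computed via the numerically stable recursion."""
        b = 1.0
        for k in range(1, c + 1):
            b = a * b / (k + a * b)
        return b

    def total_optimal_profit(lam, mu, revenue_per_job, cost_per_server, c_max=50):
        # TOP = TER - TEC with TER = lam * (1 - B) * revenue and
        # TEC = c * cost; scan capacities (the paper instead uses N-R).
        return max(
            (lam * (1 - erlang_b(c, lam / mu)) * revenue_per_job
             - c * cost_per_server, c)
            for c in range(1, c_max + 1))  # returns (TOP, optimal capacity)

    print(total_optimal_profit(lam=10, mu=1, revenue_per_job=5, cost_per_server=3))
    ```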

  16. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory... attacker remain somehow undefined and still under extensive investigation. This Thesis explores the nature of the ubiquitous attacker with a focus on how she interacts with the physical world and it defines a model that captures the abilities of the attacker. Furthermore a quantitative implementation

  17. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model.

    Science.gov (United States)

    Aberg, Kristoffer C; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improve memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward delivery and reward anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but showed a negative trend when tested after 24 h. In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of

  18. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field-scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general-purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes the porting to new parallel platforms straightforward. Results on the distributed memory computing performance of the parallel simulator are presented for field-scale applications such as tracer floods and polymer floods. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.

  19. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    Science.gov (United States)

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated the efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large section of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Although increasing the number of processors achieves concurrency and better performance, it inversely affects MPI_Allgather and increases the communication time between processors. This necessitates improving the communication methodology to decrease the spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism facilitates efficient communication between the processors in precise steps. This approach enhances communication concurrency and improves the overall runtime, making NEURON more efficient for the simulation of large neuronal network models.

  20. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    Directory of Open Access Journals (Sweden)

    Danish Shehzad

    2016-01-01

    Full Text Available The increase in the complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the model equations into subnets distributed amongst multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors; the MPI_Allgather collective is invoked for spike exchange after each interval across distributed memory systems. Increasing the number of processors yields concurrency and better performance, but it also increases the cost of MPI_Allgather and hence the communication time between processors. This necessitates improving the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves on the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication, while a recursive doubling mechanism achieves efficient communication between the processors in a fixed number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models.

  1. Stability of discrete memory states to stochastic fluctuations in neuronal systems

    Science.gov (United States)

    Miller, Paul; Wang, Xiao-Jing

    2014-01-01

    Noise can degrade memories by causing transitions from one memory state to another. For any biological memory system to be useful, the time scale of such noise-induced transitions must be much longer than the required duration for memory retention. Using biophysically realistic modeling, we consider two types of memory in the brain: short-term memories maintained by reverberating neuronal activity for a few seconds, and long-term memories maintained by a molecular switch for years. Both systems require persistence of (neuronal or molecular) activity self-sustained by an autocatalytic process and, we argue, both have limited memory lifetimes because of significant fluctuations. We first discuss a strongly recurrent cortical network model endowed with feedback loops, for short-term memory. Fluctuations are due to highly irregular spike firing, a salient characteristic of cortical neurons. We then analyze a model for long-term memory, based on an autophosphorylation mechanism of calcium/calmodulin-dependent protein kinase II (CaMKII) molecules. There, fluctuations arise from the fact that there are only a small number of CaMKII molecules at each postsynaptic density (putative synaptic memory unit). Our results are twofold. First, we demonstrate analytically and computationally the exponential dependence of stability on the number of neurons in a self-excitatory network, and on the number of CaMKII proteins in a molecular switch. Second, for each of the two systems, we implement graded memory consisting of a group of bistable switches. For the neuronal network we report interesting ramping temporal dynamics as a result of sequentially switching an increasing number of discrete, bistable units. The general observation of an exponential increase in memory stability with the system size leads to a trade-off between the robustness of memories (which increases with the size of each bistable unit) and the total amount of information storage (which decreases

  2. A Case for Tamper-Resistant and Tamper-Evident Computer Systems

    National Research Council Canada - National Science Library

    Solihin, Yan

    2007-01-01

    .... These attacks attempt to snoop or modify data transfer between various chips in a computer system such as between the processor and memory, and between processors in a multiprocessor interconnect network...

  3. Efficient calculation of open quantum system dynamics and time-resolved spectroscopy with distributed memory HEOM (DM-HEOM).

    Science.gov (United States)

    Kramer, Tobias; Noack, Matthias; Reinefeld, Alexander; Rodríguez, Mirta; Zelinskyy, Yaroslav

    2018-06-11

    Time- and frequency-resolved optical signals provide insights into the properties of light-harvesting molecular complexes, including excitation energies, dipole strengths and orientations, as well as into the exciton energy flow through the complex. The hierarchical equations of motion (HEOM) provide a unifying theory, which allows one to study the combined effects of system-environment dissipation and non-Markovian memory without making restrictive assumptions about weak or strong couplings or the separability of vibrational and electronic degrees of freedom. With increasing system size the exact solution of the open quantum system dynamics requires memory and compute resources beyond a single compute node. To overcome this barrier, we developed a scalable variant of HEOM. Our distributed memory HEOM, DM-HEOM, is a universal tool for open quantum system dynamics. It is used to accurately compute all experimentally accessible time- and frequency-resolved processes in light-harvesting molecular complexes with arbitrary system-environment couplings for a wide range of temperatures and complex sizes. © 2018 Wiley Periodicals, Inc.

  4. Computer-controlled environmental test systems - Criteria for selection, installation, and maintenance.

    Science.gov (United States)

    Chapman, C. P.

    1972-01-01

    Applications for presently marketed, new computer-controlled environmental test systems are suggested. It is shown that capital costs of these systems follow an exponential cost function curve that levels out as additional applications are implemented. Some test laboratory organization changes are recommended in terms of new personnel requirements, and facility modifications are considered in support of a computer-controlled test system. Software for computer-controlled test systems is discussed, and control loop speed constraints are defined for real-time control functions. Suitable input and output devices and memory storage device tradeoffs are also considered.

  5. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Full Text Available Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation can be similar to or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  6. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation can be similar to or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  7. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  8. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-01-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. (orig.)

  9. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  10. Evaluation of External Memory Access Performance on a High-End FPGA Hybrid Computer

    Directory of Open Access Journals (Sweden)

    Konstantinos Kalaitzis

    2016-10-01

    Full Text Available The motivation of this research was to evaluate the main memory performance of a hybrid supercomputer such as the Convey HC-x, and ascertain how the controller performs in several access scenarios, vis-à-vis hand-coded memory prefetches. Such memory patterns are very useful in stencil computations. The theoretical bandwidth of the memory of the Convey is compared with the results of our measurements. The accurate study of the memory subsystem is particularly useful for users when they are developing their application-specific personality. Experiments were performed to measure the bandwidth between the coprocessor and the memory subsystem. The experiments aimed mainly at measuring the reading access speed of the memory from the Application Engines (FPGAs). Different ways of accessing data were used in order to find the most efficient way to access memory, which is proposed for future work on the Convey HC-x. When performing a series of accesses to memory, non-uniform latencies occur. The Memory Controller of the Convey HC-x in the coprocessor attempts to cover this latency. We measure memory efficiency as the ratio of the number of memory accesses to the number of execution cycles. The result of this measurement converges to one in most cases. In addition, we performed experiments with hand-coded memory accesses. The analysis of the experimental results shows how the memory subsystem and Memory Controllers work. From this work we conclude that the memory controllers do an excellent job, largely because (transparently to the user) they seem to cache large amounts of data, and hence hand-coding is not needed in most situations.

  11. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej; Kuźnik, Krzysztof M.; Paszyński, Maciej R.; Calo, Victor M.; Pardo, D.

    2014-01-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(N p^2) for two dimensional problems, and O(N^(4/3) p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one dimensional case, O(N^(1.5) p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version thus significantly reduces the computational cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.

  12. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej

    2014-06-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(N p^2) for two dimensional problems, and O(N^(4/3) p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one dimensional case, O(N^(1.5) p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version thus significantly reduces the computational cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
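
    As a quick numerical illustration of these estimates (dropping the hidden constants, so only relative trends are meaningful), the sketch below evaluates the sequential-to-parallel cost ratio for the two dimensional case, O(N^(1.5) p^3) versus O(N p^2), which grows as sqrt(N) times p:

      /* Illustrative evaluation of the quoted 2-D cost estimates; the
       * constants are dropped, so only relative trends are meaningful. */
      #include <math.h>
      #include <stdio.h>

      int main(void) {
          long N = 1000000;                    /* degrees of freedom (example) */
          for (int p = 1; p <= 5; ++p) {       /* B-spline polynomial order */
              double seq = pow((double)N, 1.5) * pow(p, 3);
              double par = (double)N * pow(p, 2);
              printf("p=%d: sequential/parallel cost ratio ~ %.0f\n",
                     p, seq / par);            /* equals sqrt(N) * p */
          }
          return 0;
      }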

  13. Cognitive memory.

    Science.gov (United States)

    Widrow, Bernard; Aragon, Juan Carlos

    2013-05-01

    Regarding the workings of the human mind, memory and pattern recognition seem to be intertwined. You generally do not have one without the other. Taking inspiration from life experience, a new form of computer memory has been devised. Certain conjectures about human memory are keys to the central idea. The design of a practical and useful "cognitive" memory system is contemplated, a memory system that may also serve as a model for many aspects of human memory. The new memory does not function like a computer memory where specific data is stored in specific numbered registers and retrieval is done by reading the contents of the specified memory register, or done by matching key words as with a document search. Incoming sensory data would be stored at the next available empty memory location, and indeed could be stored redundantly at several empty locations. The stored sensory data would neither have key words nor would it be located in known or specified memory locations. Sensory inputs concerning a single object or subject are stored together as patterns in a single "file folder" or "memory folder". When the contents of the folder are retrieved, sights, sounds, tactile feel, smell, etc., are obtained all at the same time. Retrieval would be initiated by a query or a prompt signal from a current set of sensory inputs or patterns. A search through the memory would be made to locate stored data that correlates with or relates to the prompt input. The search would be done by a retrieval system whose first stage makes use of autoassociative artificial neural networks and whose second stage relies on exhaustive search. Applications of cognitive memory systems have been made to visual aircraft identification, aircraft navigation, and human facial recognition. Concerning human memory, reasons are given why it is unlikely that long-term memory is stored in the synapses of the brain's neural networks. Reasons are given suggesting that long-term memory is stored in DNA or RNA

  14. Phase change memory

    CERN Document Server

    Qureshi, Moinuddin K

    2011-01-01

    As conventional memory technologies such as DRAM and Flash run into scaling challenges, architects and system designers are forced to look at alternative technologies for building future computer systems. This synthesis lecture begins by listing the requirements for a next generation memory technology and briefly surveys the landscape of novel non-volatile memories. Among these, Phase Change Memory (PCM) is emerging as a leading contender, and the authors discuss the material, device, and circuit advances underlying this exciting technology. The lecture then describes architectural solutions…

  15. Conditional load and store in a shared memory

    Science.gov (United States)

    Blumrich, Matthias A; Ohmacht, Martin

    2015-02-03

    A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
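
    A minimal sketch of the usage pattern such reservation hardware supports: an atomic read-modify-write that retries whenever another processor invalidates the reservation between the load and the store. C11 compare-exchange is used here as an analogous portable primitive, not the patented mechanism itself.

      /* Retry loop in the style of load-reserve/store-conditional, expressed
       * with C11 atomics (compare-exchange plays the store-conditional role). */
      #include <stdatomic.h>

      void atomic_add(atomic_int *counter, int delta) {
          int old = atomic_load(counter);       /* "load-reserve" the value */
          /* the "store-conditional" fails if another processor wrote first;
           * on failure, old is reloaded with the current value and we retry */
          while (!atomic_compare_exchange_weak(counter, &old, old + delta)) {
              /* empty: retry with the refreshed value of old */
          }
      }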

  16. Fast Initialization of Bubble-Memory Systems

    Science.gov (United States)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1986-01-01

    Improved scheme several orders of magnitude faster than normal initialization scheme. State-of-the-art commercial bubble-memory device used. Hardware interface designed connects controlling microprocessor to bubble-memory circuitry. System software written to exercise various functions of bubble-memory system; comparison made between normal and fast techniques. Future implementations of approach utilize E2PROM (electrically-erasable programmable read-only memory) to provide greater system flexibility. Fast-initialization technique applicable to all bubble-memory devices.

  17. Computer Use and Its Effect on the Memory Process in Young and Adults

    Science.gov (United States)

    Alliprandini, Paula Mariza Zedu; Straub, Sandra Luzia Wrobel; Brugnera, Elisangela; de Oliveira, Tânia Pitombo; Souza, Isabela Augusta Andrade

    2013-01-01

    This work investigates the effect of computer use on the memory process in young people and adults under the Perceptual and Memory experimental conditions. The memory condition involved acquisition and recovery phases, at time intervals of 2 min, 24 hours, and 1 week, in pre- and post-test situations (before and after the participants…

  18. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  19. Memory-based frame synchronizer. [for digital communication systems

    Science.gov (United States)

    Stattel, R. J.; Niswander, J. K. (Inventor)

    1981-01-01

    A frame synchronizer for use in digital communications systems wherein data formats can be easily and dynamically changed is described. The use of memory array elements provides increased flexibility in format selection and sync word selection, in addition to real-time reconfiguration ability. The frame synchronizer comprises a serial-to-parallel converter which converts a serial input data stream to a constantly changing parallel data output. This parallel data output is supplied to programmable sync word recognizers, each consisting of a multiplexer and a random access memory (RAM). The multiplexer is connected to both the parallel data output and an address bus which may be connected to a microprocessor or computer for purposes of programming the sync word recognizer. The RAM is used as an associative memory or decoder and is programmed to identify a specific sync word. Additional programmable RAMs are used as counter decoders to define word bit length, frame word length, and paragraph frame length.
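
    The RAM-as-decoder idea can be emulated in software: the bits sliding through a shift register form an address into a table that is pre-programmed to flag the sync word, so changing formats means rewriting the table rather than rewiring logic. A minimal sketch with a hypothetical 8-bit sync pattern (the actual synchronizer operates on wider parallel data):

      /* RAM programmed as a sync-word decoder: table lookup replaces
       * fixed comparison logic (8-bit word chosen for brevity). */
      #include <stdint.h>
      #include <string.h>

      #define SYNC_WORD 0xB7u   /* illustrative sync pattern */

      static uint8_t ram[256];  /* "RAM" acting as an associative decoder */

      void program_recognizer(void) {
          memset(ram, 0, sizeof ram);
          ram[SYNC_WORD] = 1;   /* reprogrammable: any word can be flagged */
      }

      /* feed one bit; returns 1 when the last 8 bits match the sync word */
      int clock_in_bit(uint8_t *shift_reg, int bit) {
          *shift_reg = (uint8_t)((*shift_reg << 1) | (bit & 1));
          return ram[*shift_reg];
      }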

  20. Progress In Optical Memory Technology

    Science.gov (United States)

    Tsunoda, Yoshito

    1987-01-01

    More than 20 years have passed since the concept of optical memory was first proposed in 1966. Since then considerable progress has been made in this area, together with the creation of completely new markets for optical memory in consumer and computer application areas. The first generation of optical memory was mainly developed with holographic recording technology in the late 1960s and early 1970s. A considerable number of developments were made in both analog and digital memory applications. Unfortunately, these technologies never had the chance to become commercial products. The second generation of optical memory started at the beginning of the 1970s with bit-by-bit recording technology. Read-only optical memories such as video disks and compact audio disks have been extensively investigated. Since laser diodes were first applied to optical video disk readout in 1976, there have been extensive developments of laser diode pick-ups for optical disk memory systems. The third generation of optical memory started in 1978 with bit-by-bit read/write technology using laser diodes. Development of recording materials, both write-once and erasable, has been actively pursued at several research institutes. These technologies are mainly focused on optical memory systems for computer applications. Such practical applications of optical memory technology have resulted in the creation of new products such as compact audio disks and computer file memories.

  1. Generalization through the Recurrent Interaction of Episodic Memories: A Model of the Hippocampal System

    Science.gov (United States)

    Kumaran, Dharshan; McClelland, James L.

    2012-01-01

    In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus…

  2. Memory-guided attention: Control from multiple memory systems

    OpenAIRE

    Hutchinson, J. Benjamin; Turk-Browne, Nicholas B.

    2012-01-01

    Attention is strongly influenced by both external stimuli and internal goals. However, this useful dichotomy does not readily capture the ubiquitous and often automatic contribution of past experience stored in memory. We review recent evidence about how multiple memory systems control attention, consider how such interactions are manifested in the brain, and highlight how this framework for ‘memory-guided attention’ might help systematize previous findings and guide future research.

  3. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan Agarwal

    2016-01-01

    Full Text Available The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
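
    The two kernels named above map directly onto the physics of the array: a read drives voltages onto the rows and sums currents down the columns (I = G^T V), and a write perturbs every conductance by an outer product. A serial emulation of both follows, as a sketch of what the hardware does in a single parallel step (the names and small size are illustrative):

      /* Emulation of the two crossbar kernels; on hardware all N^2
       * multiply-accumulates of the read occur simultaneously. */
      #define N 4

      /* parallel read: output currents I[j] = sum_i G[i][j] * V[i] */
      void crossbar_read(const double G[N][N], const double V[N], double I[N]) {
          for (int j = 0; j < N; ++j) {      /* one column = one output current */
              I[j] = 0.0;
              for (int i = 0; i < N; ++i)
                  I[j] += G[i][j] * V[i];    /* Ohm's law + Kirchhoff summation */
          }
      }

      /* parallel write (rank-1 update): G += eta * x * y^T */
      void crossbar_update(double G[N][N], const double x[N], const double y[N],
                           double eta) {
          for (int i = 0; i < N; ++i)
              for (int j = 0; j < N; ++j)
                  G[i][j] += eta * x[i] * y[j];
      }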

  4. Graphical Visualization on Computational Simulation Using Shared Memory

    International Nuclear Information System (INIS)

    Lima, A B; Correa, Eberth

    2014-01-01

    The Shared Memory technique is a powerful tool for parallelizing computer codes. In particular, it can be used to visualize the results "on the fly", without stopping the running simulation. In this presentation we discuss and show how to use the technique in conjunction with a visualization code using OpenGL.
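
    One common realization of this idea on a POSIX system is a named shared memory segment: the simulation publishes each frame into the segment while a separate OpenGL viewer maps it read-only and redraws at its own pace. A minimal writer-side sketch (the segment name, frame size, and absent synchronization are illustrative simplifications):

      /* Simulation side: publish frames through POSIX shared memory so a
       * separate viewer can display them while the run continues. */
      #include <fcntl.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      #define FRAME_DOUBLES (256 * 256)   /* grid size (assumed) */

      int main(void) {
          int fd = shm_open("/sim_frame", O_CREAT | O_RDWR, 0600);
          if (fd < 0) return 1;
          size_t bytes = FRAME_DOUBLES * sizeof(double);
          if (ftruncate(fd, (off_t)bytes) != 0) return 1;

          double *frame = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
          if (frame == MAP_FAILED) return 1;

          for (int step = 0; step < 1000; ++step) {
              /* ... advance the simulation, then publish the current state */
              memset(frame, 0, bytes);   /* placeholder for real field data */
          }
          munmap(frame, bytes);
          close(fd);
          shm_unlink("/sim_frame");
          return 0;
      }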

  5. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

    Following the previous work, the counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the "LIST" mode, in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the "HISTOGRAM" mode, in which image data were stored in a core memory as digital images and then the images were transferred to magnetic disk or tape at the frame timing signal. Firstly, the counting-rate stored in the buffer memory was measured as a function of the display event-rate of the scintillation camera for the two modes. For both modes, stored counting-rates (M) were expressed by the following formula: M = N(1 - Nτ), where N was the display event-rate of the camera and τ was the resolving time, including analog-to-digital conversion time and memory cycle time. The resolving time for each mode may have been different, but it was about 10 μsec for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory, such as magnetic disk or tape, was considered for the two modes. For the "LIST" mode, the maximum value of the stored counting-rate from the camera was expressed in terms of the size of the buffer memory and the access time and data transfer-rate of the external memory. For the "HISTOGRAM" mode, the minimum frame time was determined by the size of the buffer memory and the access time and transfer rate of the external memory. In our system, the maximum value of the stored counting-rate was about 17,000 counts/sec with a buffer size of 2,000 words, and the minimum frame time was about 130 msec with a buffer size of 1024 words. These values agree well with the calculated ones. From the author's present analysis, design of the camera-computer system becomes possible for quantitative dynamic imaging, and future improvements are suggested. (author)
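
    A quick worked example of the counting-loss formula, assuming the roughly 10 μs resolving time quoted above: the fraction of events lost equals Nτ, so losses grow quadratically with the display event-rate.

      /* Worked example of M = N(1 - N*tau) with tau = 10 microseconds. */
      #include <stdio.h>

      int main(void) {
          double tau = 10e-6;                      /* resolving time, seconds */
          double rates[] = {1e3, 5e3, 1e4, 2e4};   /* display event-rates N */
          for (int i = 0; i < 4; ++i) {
              double N = rates[i];
              double M = N * (1.0 - N * tau);      /* stored counting-rate */
              printf("N = %6.0f cps -> M = %6.0f cps (%.1f%% lost)\n",
                     N, M, 100.0 * N * tau);
          }
          return 0;
      }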

  6. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin-devices like magnetic tunnel junctions (MTJs), spin-valves, and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation, which can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications based on a device-circuit co-simulation framework predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.

  7. A Survey of Phase Change Memory Systems

    Institute of Scientific and Technical Information of China (English)

    夏飞; 蒋德钧; 熊劲; 孙凝晖

    2015-01-01

    As the scale of applications increases, the demand for main memory capacity increases in order to serve large working sets. It is difficult for DRAM (dynamic random access memory) based memory systems to satisfy the memory capacity requirement due to their limited scalability and high energy consumption. Compared to DRAM, PCM (phase change memory) has better scalability, lower energy leakage, and non-volatility. PCM memory systems have become a hot topic of academic and industrial research. However, PCM technology has the following three drawbacks: long write latency, limited write endurance, and high write energy, which raises challenges to its adoption in practice. This paper surveys architectural research work to optimize PCM memory systems. First, it introduces the background of PCM. Then, it surveys research efforts on PCM memory systems in performance optimization, lifetime improvement, and energy saving in detail. The paper also compares and summarizes these techniques from multiple dimensions. Finally, it summarizes these optimization techniques and discusses possible research directions for PCM memory systems in the future.

  8. Scripting for construction of a transactive memory system in multidisciplinary CSCL environments

    NARCIS (Netherlands)

    Noroozi, O.; Biemans, H.J.A.; Weinberger, A.; Mulder, M.; Chizari, M.

    2013-01-01

    Establishing a Transactive Memory System (TMS) is essential for groups of learners, when they are multidisciplinary and collaborate online. Environments for Computer-Supported Collaborative Learning (CSCL) could be designed to facilitate the TMS. This study investigates how various aspects of a TMS

  9. A Comparison of Two Paradigms for Distributed Shared Memory

    NARCIS (Netherlands)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.; Tanenbaum, A.S.

    1992-01-01

    Two paradigms for distributed shared memory on loosely‐coupled computing systems are compared: the shared data‐object model as used in Orca, a programming language specially designed for loosely‐coupled computing systems, and the shared virtual memory model. For both paradigms two systems are

  10. A general model for memory interference in a multiprocessor system with memory hierarchy

    Science.gov (United States)

    Taha, Badie A.; Standley, Hilda M.

    1989-01-01

    The problem of memory interference in a multiprocessor system with a hierarchy of shared buses and memories is addressed. The behavior of the processors is represented by a sequence of memory requests with each followed by a determined amount of processing time. A statistical queuing network model for determining the extent of memory interference in multiprocessor systems with clusters of memory hierarchies is presented. The performance of the system is measured by the expected number of busy memory clusters. The results of the analytic model are compared with simulation results, and the correlation between them is found to be very high.

  11. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in its use of computational resources on the selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform is presented in this paper. This is accomplished by proposing a new memory efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory efficient architecture, along with proper input/output interfaces, is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering-based motion detection architecture. The new memory efficient system robustly and automatically detects motion in real-world scenarios (both for static backgrounds and pseudo-stationary backgrounds) in real-time for standard PAL (720 × 576) size color video.
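
    To make the memory argument concrete: the dominant storage cost in such schemes is whatever per-pixel background model must persist between frames. The deliberately simple sketch below keeps a single 8-bit background value per pixel, updated by exponential blending; the paper's clustering-based scheme is more sophisticated, but the structure (one pass, fixed per-pixel state) is the same:

      /* Simplistic per-pixel motion detector: one stored background frame,
       * thresholded differencing, and slow background blending (illustrative;
       * far simpler than the clustering-based architecture of the paper). */
      #include <stdint.h>
      #include <stdlib.h>

      #define W 720
      #define H 576
      #define THRESH 25        /* motion threshold (assumed) */
      #define BLEND_DEN 16     /* background update rate = 1/16 (assumed) */

      void detect_motion(const uint8_t *frame, uint8_t *background,
                         uint8_t *mask) {
          for (int i = 0; i < W * H; ++i) {
              int diff = abs((int)frame[i] - (int)background[i]);
              mask[i] = (diff > THRESH) ? 255 : 0;       /* motion pixel */
              /* slowly blend the new frame into the stored background */
              background[i] = (uint8_t)((background[i] * (BLEND_DEN - 1)
                                         + frame[i]) / BLEND_DEN);
          }
      }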

  12. Insect olfactory coding and memory at multiple timescales.

    Science.gov (United States)

    Gupta, Nitin; Stopfer, Mark

    2011-10-01

    Insects can learn, allowing them great flexibility for locating seasonal food sources and avoiding wily predators. Because insects are relatively simple and accessible to manipulation, they provide good experimental preparations for exploring mechanisms underlying sensory coding and memory. Here we review how the intertwining of memory with computation enables the coding, decoding, and storage of sensory experience at various stages of the insect olfactory system. Individual parts of this system are capable of multiplexing memories at different timescales, and conversely, memory on a given timescale can be distributed across different parts of the circuit. Our sampling of the olfactory system emphasizes the diversity of memories, and the importance of understanding these memories in the context of computations performed by different parts of a sensory system. Published by Elsevier Ltd.

  13. Associative Memory computing power and its simulation.

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

    The associative memory (AM) chip is an ASIC device specifically designed to perform "pattern matching" at very high speed and with parallel access to memory locations. The most extensive use for such a device will be the ATLAS Fast Tracker (FTK) processor, where more than 8000 chips will be installed in 128 VME boards specifically designed for high throughput in order to exploit the chip's features. Each AM chip will store a database of about 130000 pre-calculated patterns, allowing FTK to use about 1 billion patterns for the whole system, with any data inquiry broadcast to all memory elements simultaneously within the same clock cycle (10 ns); thus data retrieval time is independent of the database size. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS FTK processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 μs. The simulation of such a parallelized system is an extremely complex task when executed in comm...

  14. A short review of memory research

    Directory of Open Access Journals (Sweden)

    Igor Areh

    2004-09-01

    Full Text Available Scientific research on memory began at the end of the 19th century with studies of semantic and/or long-term memory. In most cases memory was interpreted as a storehouse for various data, and the quality of the storehouse was usually defined by the quantity of recalled data. The research work was concentrated on the specificity of the connection between memory and learning. At that time a few authors developed theories which were rare, uncommon, and ahead of their time (e.g., Bartlett, Ribot, Freud). Even in the 20th century, when the behavioural stimulus-response approach began to dominate, the measure of memory quality was still the quantity of memory recall. In the 1960s the rise of cognitive psychology began, the computer metaphor was born, and finally the behavioural comprehension of the cognitive system was surpassed. The cognitive system was understood as a computer-like interface between an organism and its environment. In recent years the computer metaphor is no longer dominant. New and efficient concepts are moving forward. The quantity of data recall, as the measure of memory quality, is not so important any more; attention is focused on the accuracy of memory recall.

  15. Stress Effects on Multiple Memory System Interactions

    Science.gov (United States)

    Ness, Deborah; Calabrese, Pasquale

    2016-01-01

    Extensive behavioural, pharmacological, and neurological research reports stress effects on mammalian memory processes. While stress effects on memory quantity have been known for decades, the influence of stress on multiple memory systems and their distinct contributions to the learning process have only recently been described. In this paper, after summarizing the fundamental biological aspects of stress/emotional arousal and recapitulating functionally and anatomically distinct memory systems, we review recent animal and human studies exploring the effects of stress on multiple memory systems. Apart from discussing the interaction between distinct memory systems in stressful situations, we will also outline the fundamental role of the amygdala in mediating such stress effects. Additionally, based on the methods applied in the herein discussed studies, we will discuss how memory translates into behaviour. PMID:27034845

  16. Stress Effects on Multiple Memory System Interactions.

    Science.gov (United States)

    Ness, Deborah; Calabrese, Pasquale

    2016-01-01

    Extensive behavioural, pharmacological, and neurological research reports stress effects on mammalian memory processes. While stress effects on memory quantity have been known for decades, the influence of stress on multiple memory systems and their distinct contributions to the learning process have only recently been described. In this paper, after summarizing the fundamental biological aspects of stress/emotional arousal and recapitulating functionally and anatomically distinct memory systems, we review recent animal and human studies exploring the effects of stress on multiple memory systems. Apart from discussing the interaction between distinct memory systems in stressful situations, we will also outline the fundamental role of the amygdala in mediating such stress effects. Additionally, based on the methods applied in the herein discussed studies, we will discuss how memory translates into behaviour.

  17. Stress Effects on Multiple Memory System Interactions

    Directory of Open Access Journals (Sweden)

    Deborah Ness

    2016-01-01

    Full Text Available Extensive behavioural, pharmacological, and neurological research reports stress effects on mammalian memory processes. While stress effects on memory quantity have been known for decades, the influence of stress on multiple memory systems and their distinct contributions to the learning process have only recently been described. In this paper, after summarizing the fundamental biological aspects of stress/emotional arousal and recapitulating functionally and anatomically distinct memory systems, we review recent animal and human studies exploring the effects of stress on multiple memory systems. Apart from discussing the interaction between distinct memory systems in stressful situations, we will also outline the fundamental role of the amygdala in mediating such stress effects. Additionally, based on the methods applied in the herein discussed studies, we will discuss how memory translates into behaviour.

  18. The CESR computer control system

    International Nuclear Information System (INIS)

    Helmke, R.G.; Rice, D.H.; Strohman, C.

    1986-01-01

    The control system for the Cornell Electron Storage Ring (CESR) has functioned satisfactorily since its implementation in 1979. Key characteristics are fast tuning response, almost exclusive use of FORTRAN as a programming language, and efficient coordinated ramping of CESR guide field elements. This original system has not, however, been able to keep pace with the increasing complexity of operation of CESR associated with performance upgrades. Limitations in address space, expandability, access to data system-wide, and program development impediments have prompted the undertaking of a major upgrade. The system under development accommodates up to 8 VAX computers for all applications programs. The database and communications semaphores reside in a shared multi-ported memory, and each hardware interface bus is controlled by a dedicated 32-bit micro-processor in a VME-based system. (orig.)

  19. Declarative and nondeclarative memory: multiple brain systems supporting learning and memory.

    Science.gov (United States)

    Squire, L R

    1992-01-01

    The topic of multiple forms of memory is considered from a biological point of view. Fact-and-event (declarative, explicit) memory is contrasted with a collection of nonconscious (nondeclarative, implicit) memory abilities including skills and habits, priming, and simple conditioning. Recent evidence is reviewed indicating that declarative and nondeclarative forms of memory have different operating characteristics and depend on separate brain systems. A brain-systems framework for understanding memory phenomena is developed in light of lesion studies involving rats, monkeys, and humans, as well as recent studies with normal humans using the divided visual field technique, event-related potentials, and positron emission tomography (PET).

  20. Time-Predictable Virtual Memory

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2016-01-01

    Virtual memory is an important feature of modern computer architectures. For hard real-time systems, memory protection is a particularly interesting feature of virtual memory. However, current memory management units are not designed for time-predictability and therefore cannot be used in such systems. This paper investigates the requirements on virtual memory from the perspective of hard real-time systems and presents the design of a time-predictable memory management unit. Our evaluation shows that the proposed design can be implemented efficiently. The design allows address translation and address range checking in constant time of two clock cycles on a cache miss. This constant time is in strong contrast to the possible cost of a miss in a translation look-aside buffer in traditional virtual memory organizations. Compared to a platform without a memory management unit, these two additional

  1. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation, based on the von Neumann architecture, has allowed, until now, the development of continuous improvements, as predicted by Moore's law. However, computation using current architectures and materials will inevitably, within the next 10 years, reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like ("neuromorphic") computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: The development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully "neuromorphic" computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  2. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.; Collier, Nathan; Pardo, David; Paszyński, Maciej R.

    2011-01-01

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  3. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.

    2011-05-14

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  4. Evaluation of reinitialization-free nonvolatile computer systems for energy-harvesting Internet of things applications

    Science.gov (United States)

    Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro

    2017-08-01

    In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of things (IoT) applications. In energy-harvesting applications, power supplies generated from renewable power sources cause frequent power failures, so the data being processed need to be backed up when power failures occur. Unless data are safely backed up before power supplies diminish, reinitialization processes are required when power supplies are recovered, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories can realize a faster backup than a conventional volatile computer system, leading to a higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows a few orders of magnitude reduction in energy in comparison with a volatile processor with SRAM.

  5. A memory efficient user interface for CLIPS micro-computer applications

    Science.gov (United States)

    Sterle, Mark E.; Mayer, Richard J.; Jordan, Janice A.; Brodale, Howard N.; Lin, Min-Jin

    1990-01-01

    The goal of the Integrated Southern Pine Beetle Expert System (ISPBEX) is to provide expert level knowledge concerning treatment advice that is convenient and easy to use for Forest Service personnel. ISPBEX was developed in CLIPS and delivered on an IBM PC AT class micro-computer, operating with an MS/DOS operating system. This restricted the size of the run time system to 640K. In order to provide a robust expert system, with on-line explanation, help, and alternative actions menus, as well as features that allow the user to back up or execute 'what if' scenarios, a memory efficient menuing system was developed to interface with the CLIPS programs. By robust, we mean an expert system that (1) is user friendly, (2) provides reasonable solutions for a wide variety of domain specific problems, (3) explains why some solutions were suggested but others were not, and (4) provides technical information relating to the problem solution. Several advantages were gained by using this type of user interface (UI). First, by storing the menus on the hard disk (instead of main memory) during program execution, a more robust system could be implemented. Second, since the menus were built rapidly, development time was reduced. Third, the user may try a new scenario by backing up to any of the input screens and revising segments of the original input without having to retype all the information. And fourth, asserting facts from the menus provided for a dynamic and flexible fact base. This UI technology has been applied successfully in expert systems applications in forest management, agriculture, and manufacturing. This paper discusses the architecture of the UI system, human factors considerations, and the menu syntax design.

  6. Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms.

    Science.gov (United States)

    James, Ella L; Bonsall, Michael B; Hoppitt, Laura; Tunbridge, Elizabeth M; Geddes, John R; Milton, Amy L; Holmes, Emily A

    2015-08-01

    Memory of a traumatic event becomes consolidated within hours. Intrusive memories can then flash back repeatedly into the mind's eye and cause distress. We investigated whether reconsolidation, the process during which memories become malleable when recalled, can be blocked using a cognitive task, and whether such an approach can reduce these unbidden intrusions. We predicted that reconsolidation of a reactivated visual memory of experimental trauma could be disrupted by engaging in a visuospatial task that would compete for visual working memory resources. We showed that intrusive memories were virtually abolished by playing the computer game Tetris following a memory-reactivation task 24 hr after initial exposure to experimental trauma. Furthermore, both memory reactivation and playing Tetris were required to reduce subsequent intrusions (Experiment 2), consistent with reconsolidation-update mechanisms. A simple, noninvasive cognitive-task procedure administered after emotional memory has already consolidated (i.e., > 24 hours after exposure to experimental trauma) may prevent the recurrence of intrusive memories of those emotional events. © The Author(s) 2015.

  7. Homodyne detection of holographic memory systems

    Science.gov (United States)

    Urness, Adam C.; Wilson, William L.; Ayres, Mark R.

    2014-09-01

    We present a homodyne detection system implemented for a page-wise holographic memory architecture. Homodyne detection by holographic memory systems enables phase quadrature multiplexing (doubling address space), and lower exposure times (increasing read transfer rates). It also enables phase modulation, which improves signal-to-noise ratio (SNR) to further increase data capacity. We believe this is the first experimental demonstration of homodyne detection for a page-wise holographic memory system suitable for a commercial design.

  8. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    Directory of Open Access Journals (Sweden)

    Lee Mike Myung-Ok

    2006-01-01

    Full Text Available This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, a 32-bit dedicated RISC processor for control, on-chip program/data memory, a data frame buffer, and a direct memory access (DMA) controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  9. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di; Diao, Ruisheng; Huang, Zhenyu

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch-switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
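
    As a concrete illustration of the distributed-memory flavor, the sketch below (ours, not the authors' code) partitions a toy set of generator swing equations across MPI ranks with mpi4py; each rank integrates its own generators and an Allgather synchronizes the network coupling once per time step. The model and all constants are invented.

        # Hypothetical mpi4py sketch of MPI-parallel dynamic simulation.
        # Run with e.g.: mpirun -n 4 python dynsim_sketch.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_total = 8 * size            # total generators (assumed divisible by size)
        n_local = n_total // size
        dt, steps = 0.01, 100

        # local state: rotor angle delta and speed deviation omega per generator
        delta = np.zeros(n_local)
        omega = np.zeros(n_local)
        if rank == 0:
            omega[0] = 0.1            # a small disturbance on one machine

        all_delta = np.empty(n_total)
        for _ in range(steps):
            # gather every angle so each rank can evaluate the coupling term
            comm.Allgather(delta, all_delta)
            # toy electrical coupling: each machine is pulled toward the mean angle
            accel = -np.sin(delta - all_delta.mean()) - 0.1 * omega
            omega += dt * accel       # explicit Euler, for illustration only
            delta += dt * omega

        if rank == 0:
            print("final mean angle:", all_delta.mean())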

  10. Memory allocation and computations for Laplace’s equation of 3-D arbitrary boundary problems

    Directory of Open Access Journals (Sweden)

    Tsay Tswn-Syau

    2017-01-01

    Computation iteration schemes and a memory allocation technique for the finite difference method are presented in this paper. The transformed form of a groundwater flow problem in generalized curvilinear coordinates is taken as the illustrating example, and a 3-dimensional, second-order-accurate 19-point scheme is presented. Traditional element-by-element methods (e.g., SOR) are preferred since they are simple and memory efficient but time consuming in computation. For efficient memory allocation, an index method is presented to store the sparse non-symmetric matrix of the problem. For computations, conjugate-gradient-like methods are reported to be computationally efficient. Among them, using incomplete Cholesky decomposition as a preconditioner is reported to be a good method for iteration convergence. In general, the index method developed in this paper has the following advantages: (1) adaptable to various governing and boundary conditions, (2) flexible for higher-order approximation, (3) independent of problem dimension, (4) efficient for complex problems when the global matrix is not symmetric, (5) convenient for general sparse matrices, (6) computationally efficient in the most time-consuming procedure of matrix multiplication, and (7) applicable to any developed matrix solver.
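
    As a rough sketch of the storage-plus-solver combination described above, the following SciPy code keeps a finite-difference matrix in a compressed index format (CSR) and solves it with a preconditioned, conjugate-gradient-like iteration. A 7-point Laplacian stands in for the paper's 19-point scheme, and ILU replaces the incomplete Cholesky factorization because the perturbed matrix is non-symmetric.

        # Index-format storage plus preconditioned iterative solve (illustrative).
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 10                                        # grid points per dimension
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = sp.kronsum(sp.kronsum(T, T), T).tocsr()   # 3-D Laplacian in CSR form
        A = A + sp.random(n**3, n**3, density=1e-4)   # perturb: A becomes non-symmetric

        b = np.ones(n**3)
        ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)    # incomplete-LU preconditioner
        M = spla.LinearOperator(A.shape, ilu.solve)
        x, info = spla.gmres(A, b, M=M)               # conjugate-gradient-like solver
        print("converged" if info == 0 else f"gmres returned {info}")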

  11. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communication between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, the location of the second portion being carried in the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
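
    A minimal data-structure sketch of the split-token idea (our illustration; the names and fields are not taken from the patent): a moving first portion travels between computers and carries the location of a resident second portion held in some computer's memory.

        from dataclasses import dataclass

        @dataclass
        class ResidentPortion:
            data: bytes                # bulk data used when the function executes

        @dataclass
        class MovingPortion:
            function: str              # function the token asks a computer to run
            home_node: int             # computer whose memory holds the resident part
            address: int               # location of the resident part on that node

        # each computer's memory maps addresses to resident portions
        memories = {0: {0x10: ResidentPortion(b"payload")}, 1: {}}

        def execute(token: MovingPortion) -> None:
            resident = memories[token.home_node][token.address]
            print(f"run {token.function} on {len(resident.data)} bytes "
                  f"from node {token.home_node}")

        execute(MovingPortion("checksum", home_node=0, address=0x10))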

  12. Configurable memory system and method for providing atomic counting operations in a memory device

    Science.gov (United States)

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications, making the most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provide all this functionality through a combination of software and hardware.

  13. Scripting for Construction of a Transactive Memory System in Multidisciplinary CSCL Environments

    Science.gov (United States)

    Noroozi, Omid; Biemans, Harm J. A.; Weinberger, Armin; Mulder, Martin; Chizari, Mohammad

    2013-01-01

    Establishing a Transactive Memory System (TMS) is essential for groups of learners, when they are multidisciplinary and collaborate online. Environments for Computer-Supported Collaborative Learning (CSCL) could be designed to facilitate the TMS. This study investigates how various aspects of a TMS (i.e., specialization, coordination, and trust)…

  14. Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    1985-01-01

    Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…

  15. Towards Modeling False Memory With Computational Knowledge Bases.

    Science.gov (United States)

    Li, Justin; Kohanyi, Emma

    2017-01-01

    One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling.
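
    A much-simplified sketch of this kind of model: one step of spreading activation over WordNet from studied Deese-Roediger-McDermott list words, accumulating activation on a non-studied lure. This is our toy reconstruction, not the authors' code; it requires NLTK with the WordNet corpus installed.

        # pip install nltk; then: import nltk; nltk.download('wordnet')
        from collections import defaultdict
        from nltk.corpus import wordnet as wn

        def neighbours(word):
            # synonyms plus one step up/down the hypernym/hyponym links
            related = set()
            for syn in wn.synsets(word):
                related.update(l.lower() for l in syn.lemma_names())
                for nbr in syn.hypernyms() + syn.hyponyms():
                    related.update(l.lower() for l in nbr.lemma_names())
            return related

        studied = ["bed", "rest", "awake", "dream", "pillow", "snore", "nap"]
        activation = defaultdict(float)
        for word in studied:
            for target in neighbours(word):
                activation[target] += 1.0   # one unit of activation per source word

        print("activation on lure 'sleep':", activation["sleep"])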

  16. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  17. A computer-based spectrometry system for assessment of body radioactivity

    International Nuclear Information System (INIS)

    Venn, J.B.

    1985-01-01

    This paper describes a PDP-11 computer system operating under RT-11 for the acquisition and processing of pulse height spectra in the measurement of body radioactivity. SABRA (system for the assessment of body radioactivity) provides control of multiple detection systems from visual display consoles by means of a command language. A wide range of facilities is available for the display, processing and storage of acquired spectra and complex operations may be pre-programmed by means of the SABRA MACRO language. The hardware includes a CAMAC interface to the detection systems, disc cartridge drives for mass storage of data and programs, and data-links to other computers. The software is written in assembler language and includes special features for the dynamic allocation of computer memory and for safeguarding acquired data. (orig.)

  18. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    Science.gov (United States)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data are being generated from global surveillance systems and model simulations. They are widely used to analyze environmental problems, such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging due to both data- and computing-intensive issues in data processing and analysis. To tackle the challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g., netCDF4, HDF4) as native formats, which are stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve datasets according to a defined geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy is applied to split the queried data equally across the computing nodes and store them in memory as climateRDDs for processing. By leveraging Spark SQL and User Defined Functions (UDFs), climate data analysis operations can be conducted in an intuitive SQL style. ClimateSpark is evaluated with two use cases using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. One use case conducts a spatiotemporal query and visualizes the subset results in an animation; the other compares different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark can significantly accelerate data query and processing, and enables complex analysis services to be served in a SQL-style fashion.
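
    The SQL-style subsetting can be pictured with plain PySpark on a toy DataFrame (the real system indexes netCDF/HDF arrays in HDFS; the table and column names below are invented):

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("climate-subset-sketch").getOrCreate()
        df = spark.createDataFrame(
            [(38.9, -77.0, "2015-07-01", 301.2), (52.5, 13.4, "2015-07-01", 295.7)],
            ["lat", "lon", "time", "temperature"],
        )
        df.createOrReplaceTempView("merra")

        # spatiotemporal bounding-box query followed by a simple aggregation
        subset = spark.sql("""
            SELECT time, avg(temperature) AS mean_t
            FROM merra
            WHERE lat BETWEEN 35 AND 45 AND lon BETWEEN -80 AND -70
              AND time BETWEEN '2015-07-01' AND '2015-07-31'
            GROUP BY time
        """)
        subset.show()
        spark.stop()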

  19. Multiple Memory Systems Are Unnecessary to Account for Infant Memory Development: An Ecological Model

    Science.gov (United States)

    Rovee-Collier, Carolyn; Cuevas, Kimberly

    2009-01-01

    How the memory of adults evolves from the memory abilities of infants is a central problem in cognitive development. The popular solution holds that the multiple memory systems of adults mature at different rates during infancy. The "early-maturing system" (implicit or nondeclarative memory) functions automatically from birth, whereas the…

  20. Human brain as the model of a new computer system. II

    Energy Technology Data Exchange (ETDEWEB)

    Holtz, K; Langheld, E

    1981-12-09

    For Pt. I see IBID., Vol. 29, No. 22, P. 13 (1981). The authors describe the self-generating connection system of a self-teaching, program-free associative computer. The self-generating connection systems are regarded as simulation models of the human brain and are compared with the brain structure. The system hardware comprises a microprocessor, PROM, memory, VDU, and keyboard unit.

  1. Concurrent performance of two memory tasks: evidence for domain-specific working memory systems.

    Science.gov (United States)

    Cocchini, Gianna; Logie, Robert H; Della Sala, Sergio; MacPherson, Sarah E; Baddeley, Alan D

    2002-10-01

    Previous studies of dual-task coordination in working memory have shown a lack of dual-task interference when a verbal memory task is combined with concurrent perceptuomotor tracking. Two experiments are reported in which participants were required to perform pairwise combinations of (1) a verbal memory task, a visual memory task, and perceptuomotor tracking (Experiment 1), and (2) pairwise combinations of the two memory tasks and articulatory suppression (Experiment 2). Tracking resulted in no disruption of the verbal memory preload over and above the impact of a delay in recall and showed only minimal disruption of the retention of the visual memory load. Performing an ongoing verbal memory task had virtually no impact on retention of a visual memory preload or vice versa, indicating that performing two demanding memory tasks results in little mutual interference. Experiment 2 also showed minimal disruption when the two memory tasks were combined, although verbal memory (but not visual memory) was clearly disrupted by articulatory suppression interpolated between presentation and recall. These data suggest that a multiple-component working memory model provides a better account for performance in concurrent immediate memory tasks than do theories that assume a single processing and storage system or a limited-capacity attentional system coupled with activated memory traces.

  2. Dynamic memory management for embedded systems

    CERN Document Server

    Atienza Alonso, David; Poucet, Christophe; Peón-Quirós, Miguel; Bartzas, Alexandros; Catthoor, Francky; Soudris, Dimitrios

    2015-01-01

    This book provides a systematic and unified methodology, including basic principles and reusable processes, for dynamic memory management (DMM) in embedded systems.  The authors describe in detail how to design and optimize the use of dynamic memory in modern, multimedia and network applications, targeting the latest generation of portable embedded systems, such as smartphones. Coverage includes a variety of design and optimization topics in electronic design automation of DMM, from high-level software optimization to microarchitecture-level hardware support. The authors describe the design of multi-layer dynamic data structures for the final memory hierarchy layers of the target portable embedded systems and how to create a low-fragmentation, cost-efficient, dynamic memory management subsystem out of configurable components for the particular memory allocation and de-allocation patterns for each type of application.  The design methodology described in this book is based on propagating constraints among de...

  3. The Research on Linux Memory Forensics

    Science.gov (United States)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on the operating system API, and instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data and is compatible with multiple versions of the Linux kernel.
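
    The version-independent offset lookup can be sketched with pyelftools: read the DWARF debugging information of a kernel image and report the byte offset of a structure member. This illustrates the general technique rather than the paper's tool; the file path and chosen structure are placeholders.

        from elftools.elf.elffile import ELFFile

        def member_offset(vmlinux_path, struct_name, member_name):
            """Return the byte offset of struct_name.member_name, or None."""
            with open(vmlinux_path, "rb") as f:
                dwarf = ELFFile(f).get_dwarf_info()
                for cu in dwarf.iter_CUs():
                    for die in cu.iter_DIEs():
                        name = die.attributes.get("DW_AT_name")
                        if (die.tag == "DW_TAG_structure_type" and name
                                and name.value == struct_name.encode()):
                            for child in die.iter_children():
                                cname = child.attributes.get("DW_AT_name")
                                if (child.tag == "DW_TAG_member" and cname
                                        and cname.value == member_name.encode()):
                                    return child.attributes[
                                        "DW_AT_data_member_offset"].value
            return None

        print(member_offset("/path/to/vmlinux", "task_struct", "comm"))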

  4. FPGA-based prototype storage system with phase change memory

    Science.gov (United States)

    Li, Gezi; Chen, Xiaogang; Chen, Bomy; Li, Shunfen; Zhou, Mi; Han, Wenbing; Song, Zhitang

    2016-10-01

    With the ever-increasing amount of data being stored via social media, mobile telephony base stations, network devices, etc., database systems face severe bandwidth bottlenecks when moving vast amounts of data from storage to the processing nodes. At the same time, Storage Class Memory (SCM) technologies such as Phase Change Memory (PCM), with unique features like fast read access, high density, non-volatility, byte-addressability, positive response to increasing temperature, superior scalability, and zero standby leakage, have changed the landscape of modern computing and storage systems. In such a scenario, we present a storage system called FLEET which can off-load partial or whole SQL queries from the CPU to the storage engine. FLEET uses an FPGA rather than conventional CPUs to implement the off-load engine due to its highly parallel nature. We have implemented an initial prototype of FLEET with PCM-based storage. The results demonstrate that significant performance and CPU utilization gains can be achieved by pushing selected query-processing components inside the PCM-based storage.

  5. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical replacement interval when the computing demand, the cost and performance of the computers, etc. are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented by only a central processing unit (CPU), and all the computing demand is to be processed on the present computer until the next replacement. In model 2, on the other hand, excess demand is admitted and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all the demand. Model 4 is the same as model 3, but excess demand is admitted and may be processed at another center. (1) The computing demand at JAERI, (2) the conformity of Grosch's law for recent computers, and (3) the replacement cost of computer systems, etc. are also described. (author)
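
    A toy rendition of the model-1 idea: demand grows every year, each candidate replacement interval implies a purchase cost sized for the final year's demand plus running costs, and the interval with the lowest average annual cost wins. Every number below is invented.

        def annual_cost(interval, purchase=100.0, growth=1.2, base_run=10.0):
            # a machine kept for `interval` years must be sized for the final
            # year's demand; running cost grows with the demand actually served
            capacity = growth ** interval
            running = sum(base_run * growth ** year for year in range(interval))
            return (purchase * capacity + running) / interval

        best = min(range(1, 11), key=annual_cost)
        print("optimal replacement interval:", best, "years")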

  6. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    The article considers the problem of computer-aided instruction in a context-chain-motivated situation control system for complex technical system behavior. Conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on informal theories by physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting education, such as the reinforcement value and the time between the stimulus, the action, and the reinforcement. The change of the contextual links between situational elements during use is also formalized. Examples and results are given from computer instruction experiments with the LEGO MINDSTORMS NXT robot device, equipped with ultrasonic distance, touch, and light sensors.

  7. Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF): A data assimilation scheme for memory intensive, high performance computing

    Science.gov (United States)

    Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick

    2015-12-01

    Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant on the Ensemble Kalman Filter (EnKF). RumEnKF differs from EnKF in that it does not store the entire ensemble, but rather only saves the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated is less dependent on available memory, and mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current-generation supercomputer architectures, where the number of available floating point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces the memory used compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low-dimensional Lorenz and high-dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error for the three algorithms.
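
    The core memory-saving trick fits in a few lines: accumulate only a running mean and covariance (Welford-style) as ensemble members are generated, so memory scales with the state size rather than the ensemble size. A schematic reconstruction, not the authors' implementation.

        import numpy as np

        n_state, n_members = 4, 1000
        rng = np.random.default_rng(0)

        mean = np.zeros(n_state)
        cov_acc = np.zeros((n_state, n_state))
        for k in range(1, n_members + 1):
            member = rng.normal(size=n_state)   # stand-in for one model run
            delta = member - mean
            mean += delta / k                   # running mean
            cov_acc += np.outer(delta, member - mean)
            # `member` is discarded here: memory is O(n_state**2), not O(n_members)

        cov = cov_acc / (n_members - 1)
        print(mean.round(2), np.diag(cov).round(2))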

  8. Computer-based system for acquisition of nuclear well log data

    International Nuclear Information System (INIS)

    Meisner, J.E.

    1983-01-01

    A computer-based well-logging system is described for acquiring nuclear well-log data, including gamma-ray energy spectra and neutron population decay-rate data, and providing a real-time presentation of the data on an operator's display based on a traversal by a downhole instrument of a prescribed borehole depth interval. The system has a multichannel analyzer including a pulse-height analyzer and a memory. After a spectral gamma-ray pulse signal coming from a downhole instrument over a logging cable is amplified and conditioned, the pulse-height analyzer converts the pulse height into a digital code by peak detection, sample-and-hold action, and analog-to-digital conversion. The digital code defines the address of a memory location, or channel, corresponding to a particular gamma-ray energy and having a count value to be incremented. The spectrum data are then accessed by the system central processing unit (CPU) for analysis, and routed to the operator's display for presentation as a plot of relative gamma-ray emission activity versus energy level. For acquiring neutron decay-rate data, the system has a multichannel scaling unit including a memory and a memory address generator. After a burst of neutrons downhole, thermal and epithermal neutron detector pulses build up and die away. Using the neutron source trigger as an initializing reference, the address generator produces a sequence of memory address codes, each code addressing the memory for a prescribed period of time, so as to define a series of time slots. A detector pulse signal produced during a time slot results in the incrementing of the count value in an addressed memory location. (author)
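
    In software terms the two acquisition modes reduce to two histogramming loops: pulse-height analysis increments the memory channel addressed by the digitized pulse amplitude, and multichannel scaling increments the time slot in which a detector pulse arrives after the neutron burst. The sketch below uses synthetic pulses and invented constants.

        import numpy as np

        # pulse-height analysis: ADC output is the channel address to increment
        channels = np.zeros(1024, dtype=np.int64)            # spectrum memory
        pulse_heights = np.random.default_rng(1).integers(0, 1024, size=5000)
        for h in pulse_heights:
            channels[h] += 1

        # multichannel scaling: time since the neutron burst selects the slot
        slots = np.zeros(256, dtype=np.int64)                # decay-curve memory
        slot_width = 4e-6                                    # seconds per slot
        arrivals = np.random.default_rng(2).exponential(2e-4, size=5000)
        for t in arrivals:
            i = int(t / slot_width)
            if i < slots.size:                               # late pulses are dropped
                slots[i] += 1

        print(channels.argmax(), slots[:5])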

  9. A homotopy method for solving Riccati equations on a shared memory parallel computer

    International Nuclear Information System (INIS)

    Zigic, D.; Watson, L.T.; Collins, E.G. Jr.; Davis, L.D.

    1993-01-01

    Although there are numerous algorithms for solving Riccati equations, there still remains a need for algorithms which can operate efficiently on large problems and on parallel machines. This paper gives a new homotopy-based algorithm for solving Riccati equations on a shared memory parallel computer. The central part of the algorithm is the computation of the kernel of the Jacobian matrix, which is essential for the corrector iterations along the homotopy zero curve. Using a Schur decomposition the tensor product structure of various matrices can be efficiently exploited. The algorithm allows for efficient parallelization on shared memory machines
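
    The continuation idea can be illustrated on a dense toy problem: deform from the trivial equation X = 0 at t = 0 to the Riccati residual at t = 1, correcting with a Newton-type solve at each step. This is a serial sketch of the homotopy principle only, not the paper's parallel Schur-based algorithm.

        import numpy as np
        from scipy.optimize import fsolve

        n = 3
        rng = np.random.default_rng(0)
        A = rng.normal(size=(n, n)) - 3 * np.eye(n)   # toy stable-ish system matrix
        B = rng.normal(size=(n, 1))
        Q, R = np.eye(n), np.eye(1)

        def riccati(Xf):
            X = Xf.reshape(n, n)
            F = A.T @ X + X @ A - X @ B @ np.linalg.solve(R, B.T) @ X + Q
            return F.ravel()

        X = np.zeros(n * n)                           # X0 = 0 starts the homotopy
        for t in np.linspace(0.1, 1.0, 10):           # walk along the zero curve
            H = lambda Xf, t=t: t * riccati(Xf) + (1 - t) * Xf
            X = fsolve(H, X)                          # corrector step

        print("Riccati residual:", np.linalg.norm(riccati(X)))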

  10. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  11. MEMORY SYSTEMS AND THE ADDICTED BRAIN

    Directory of Open Access Journals (Sweden)

    Jarid Goodman

    2016-02-01

    The view that anatomically distinct memory systems differentially contribute to the development of drug addiction and relapse has received extensive support. The present brief review revisits this hypothesis as it was originally proposed twenty years ago (White, 1996) and highlights several recent developments. Extensive research employing a variety of animal learning paradigms indicates that dissociable neural systems mediate distinct types of learning and memory. Each memory system potentially contributes unique components to the learned behavior supporting drug addiction and relapse. In particular, the shift from recreational drug use to compulsive drug abuse may reflect a neuroanatomical shift from cognitive control of behavior mediated by the hippocampus/dorsomedial striatum toward habitual control of behavior mediated by the dorsolateral striatum (DLS). In addition, stress/anxiety may constitute a cofactor that facilitates DLS-dependent memory, and this may serve as a neurobehavioral mechanism underlying the increased drug use and relapse in humans following stressful life events. Evidence supporting the multiple systems view of drug addiction comes predominantly from studies of learning and memory that have employed as reinforcers addictive substances often considered within the context of drug addiction research, including cocaine, alcohol, and amphetamines. In addition, recent evidence suggests that the memory systems approach may also be helpful for understanding topical sources of addiction that reflect emerging health concerns, including marijuana use, high-fat diet, and video game playing.

  12. Memory H∞ performance control of a class T-S fuzzy system

    Science.gov (United States)

    Wang, Yanhua; He, Xiqin; Wu, Zhihua; Kang, Xiulan; Xiu, Wei

    2018-03-01

    For many nonlinear systems, both stability and certain performance indicators are required of the control system. The characteristics of the T-S model make it possible to describe a great number of nonlinear systems efficiently. First, the T-S model with uncertainties and external disturbance is used to represent the nonlinear system, so that H∞ performance control can be implemented by means of fuzzy control theory. Meanwhile, owing to the widespread presence of time delays in the controlled plant, a memory fuzzy state-feedback controller is constructed. On the basis of Lyapunov stability theory, the closed-loop system is made stable by establishing a Lyapunov function. The gain matrix of the memory state-feedback controller is obtained by applying linear matrix inequality (LMI) methodology, and it simultaneously makes the system meet the H∞ performance requirement. Ultimately, the efficiency of the above-mentioned method is exemplified by numerical computation.
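
    The LMI machinery involved can be shrunk to its simplest instance: certify stability of one linear subsystem by finding a symmetric P that is positive definite with A'P + PA negative definite, posed as a semidefinite program. The paper's memory H∞ synthesis adds delay terms and fuzzy membership blending on top of this kind of feasibility problem; the matrix below is invented.

        import cvxpy as cp
        import numpy as np

        A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # toy stable subsystem matrix
        n = A.shape[0]

        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        constraints = [P >> eps * np.eye(n),                  # P > 0
                       A.T @ P + P @ A << -eps * np.eye(n)]   # A'P + PA < 0
        problem = cp.Problem(cp.Minimize(0), constraints)
        problem.solve()
        print(problem.status, P.value)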

  13. Parallel Breadth-First Search on Distributed Memory Systems

    Energy Technology Data Exchange (ETDEWEB)

    Computational Research Division; Buluc, Aydin; Madduri, Kamesh

    2011-04-15

    Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed-memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse-matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex-based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
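
    Stripped of its distribution, the level-synchronous strategy is a frontier loop; the comments mark the two spots a distributed implementation partitions across ranks. A serial sketch for orientation:

        def bfs_levels(adj, source):
            level = {source: 0}
            frontier = [source]
            depth = 0
            while frontier:
                depth += 1
                next_frontier = []
                for u in frontier:           # distributed: frontier split by owner rank
                    for v in adj[u]:
                        if v not in level:   # distributed: de-duplicate after exchange
                            level[v] = depth
                            next_frontier.append(v)
                frontier = next_frontier
            return level

        adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        print(bfs_levels(adj, 0))            # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}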

  14. FPGA Based Intelligent Co-operative Processor in Memory Architecture

    DEFF Research Database (Denmark)

    Ahmed, Zaki; Sotudeh, Reza; Hussain, Dil Muhammad Akbar

    2011-01-01

    In a continuing effort to improve computer system performance, Processor-In-Memory (PIM) architecture has emerged as an alternative solution. PIM architecture incorporates computational units and control logic directly on the memory to provide immediate access to the data. To exploit the potential benefits of PIM, a concept of Co-operative Intelligent Memory (CIM) was developed by the intelligent system group of the University of Hertfordshire, based on the previously developed Co-operative Pseudo Intelligent Memory (CPIM). This paper provides an overview of the previous works (CPIM, CIM) and their realization...

  15. Local rollback for fault-tolerance in parallel computing systems

    Science.gov (United States)

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
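
    Reduced to a schematic loop, the claimed control flow looks as follows; the snapshot, the injected error, and the unrecoverable-condition flag are placeholders for the patent's hardware mechanisms.

        import copy
        import random

        def run_interval(state):
            state["step"] += 1
            error = random.random() < 0.2    # injected soft error
            unrecoverable = False            # e.g. data already sent off-node
            return error, unrecoverable

        state = {"step": 0}
        for _ in range(10):
            snapshot = copy.deepcopy(state)              # state at interval start
            error, unrecoverable = run_interval(state)
            if error and not unrecoverable:
                state = snapshot                         # local rollback: retry interval
        print(state)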

  16. Computational dissection of human episodic memory reveals mental process-specific genetic profiles.

    Science.gov (United States)

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J-F

    2015-09-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory.

  17. Sparse distributed memory overview

    Science.gov (United States)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
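
    A miniature Kanerva-style sparse distributed memory fits in a few lines: random hard locations, a Hamming-radius address decoder, and counter vectors that accumulate writes, so a read from a noisy cue still recovers the stored pattern. The sizes and radius below are arbitrary demo choices.

        import numpy as np

        rng = np.random.default_rng(0)
        dim, n_loc, radius = 256, 2000, 112

        hard = rng.integers(0, 2, size=(n_loc, dim))    # hard-location addresses
        counters = np.zeros((n_loc, dim), dtype=np.int32)

        def active(addr):
            # address decoder: locations within `radius` Hamming bits of addr
            return np.count_nonzero(hard != addr, axis=1) <= radius

        def write(addr, data):
            counters[active(addr)] += 2 * data - 1      # +1 for 1-bits, -1 for 0-bits

        def read(addr):
            return (counters[active(addr)].sum(axis=0) > 0).astype(int)

        pattern = rng.integers(0, 2, size=dim)
        write(pattern, pattern)                         # autoassociative store
        cue = pattern.copy()
        cue[:20] ^= 1                                   # corrupt 20 bits of the cue
        print("bits recovered:", dim - np.count_nonzero(read(cue) != pattern))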

  18. Gamma spectrometric system based on the personal computer Pravetz-83

    International Nuclear Information System (INIS)

    Yanakiev, K; Grigorov, T.; Vuchkov, M.

    1985-01-01

    A gamma spectrometric system based on the personal microcomputer Pravetz-85 is described. The analog modules are NIM standard. ADC data are stored in the memory of the computer via a DMA channel, and real-time data processing is possible. The results from a series of tests indicate that the performance of the system is comparable with that of commercially available computerized spectrometers from Ortec and Canberra

  19. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    Science.gov (United States)

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)

  1. The Associative Memory system for the FTK processor at ATLAS

    CERN Document Server

    Cipriani, R; The ATLAS collaboration; Donati, S; Giannetti, P; Lanza, A; Luciano, P; Magalotti, D; Piendibene, M

    2013-01-01

    Experiments at the LHC hadron collider search for extremely rare processes hidden in much larger background levels. As the experiment complexity, the accelerator backgrounds, and the instantaneous luminosity increase, increasingly complex and exclusive selections are necessary. We present results and performances of a new prototype of the Associative Memory (AM) system, the core of the Fast Tracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment trigger upgrade. The AM system provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the "combinatorial challenge", is beaten by the AM technology exploiting parallelism to the maximum level. The Associative Memory compares the event to pre-calculated "expectations" or "patterns" (pattern matching) at once and looks for candidate tracks called "roads". The problem is solved by the time the data are loaded into the AM devices. We report ...
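
    Serialized in ordinary software, the pattern-matching core looks like the sketch below: a bank of coarse hit combinations ("patterns") is compared against an event's fired detector bins, and the matches are the candidate "roads". In the AM chips this comparison happens for all patterns simultaneously as the data are loaded; the bins and patterns here are invented.

        # invented coarse detector bins, one entry per layer
        pattern_bank = {
            ("a3", "b7", "c2"): "road-1",
            ("a3", "b6", "c2"): "road-2",
            ("a1", "b2", "c9"): "road-3",
        }

        def find_roads(event_hits):
            """event_hits: per-layer sets of fired bins."""
            roads = []
            for pattern, road in pattern_bank.items():
                if all(hit in event_hits[layer] for layer, hit in enumerate(pattern)):
                    roads.append(road)
            return roads

        event = [{"a3", "a5"}, {"b7"}, {"c2", "c4"}]
        print(find_roads(event))             # ['road-1']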

  2. Extended memory management under RTOS

    Science.gov (United States)

    Plummer, M.

    1981-01-01

    A technique for extended memory management in ROLM 1666 computers using FORTRAN is presented. A general software system is described for which the technique can be ideally applied. The memory manager interface with the system is described. The protocols by which the manager is invoked are presented, as well as the methods used by the manager.

  3. Implementation of relational data base management systems on micro-computers

    International Nuclear Information System (INIS)

    Huang, C.L.

    1982-01-01

    This dissertation describes an implementation of a Relational Data Base Management System on a microcomputer. A specific floppy-disk-based machine called TERAK is used, and a high-level query interface, similar to a subset of the SEQUEL language, is provided. The system contains sub-systems such as I/O, file management, virtual memory management, the query system, B-tree management, the scanner, the command interpreter, the expression compiler, garbage collection, linked-list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) It is highly modularized. (2) The system is physically segmented into 16 logically independent, overlayable segments, in such a way that a minimal amount of memory is needed at execution time. (3) A virtual memory system is simulated that provides the system with seemingly unlimited memory space. (4) A language translator is applied to recognize user requests in the query language; its code generator produces compact code for the execution of UPDATE, DELETE, and QUERY commands. (5) A complete set of basic functions needed for on-line data base manipulation is provided through a friendly query interface. (6) Dependency on the environment (both software and hardware) is eliminated as much as possible, so that it would be easy to transplant the system to other computers. (7) Each relation is simulated as a sequential file. The system is intended to be a highly efficient, single-user system suited for use by small or medium-sized organizations for, say, administrative purposes. Experiments show that quite satisfying results have indeed been achieved

  4. In-Depth Analysis of Computer Memory Acquisition Software for Forensic Purposes.

    Science.gov (United States)

    McDown, Robert J; Varol, Cihan; Carvajal, Leonardo; Chen, Lei

    2016-01-01

    Comparison studies on random access memory (RAM) acquisition tools are either limited in metrics or cover tools designed for older operating systems. Therefore, this study evaluates seven widely used shareware or freeware/open-source RAM acquisition forensic tools that are compatible with the latest 64-bit Windows operating systems. The tools' user interface capabilities, platform limitations, reporting capabilities, total execution time, shared and proprietary DLLs, modified registry keys, and invoked files during processing were compared. We observed that Windows Memory Reader and Belkasoft's Live RAM Capturer leave the fewest fingerprints in memory when loaded. On the other hand, ProDiscover and FTK Imager perform poorly in memory usage, processing time, DLL usage, and unwanted artifacts introduced to the system. While Belkasoft's Live RAM Capturer is the fastest at obtaining an image of the memory, ProDiscover takes the longest time to do the same job.

  5. From Focused Thought to Reveries: A Memory System for a Conscious Robot

    Directory of Open Access Journals (Sweden)

    Christian Balkenius

    2018-04-01

    We introduce a memory model for robots that can account for many aspects of an inner world, ranging from object permanence, episodic memory, and planning to imagination and reveries. It is modeled after neurophysiological data and includes parts of the cerebral cortex together with models of arousal systems that are relevant for consciousness. The three central components are an identification network, a localization network, and a working memory network. Attention serves as the interface between the inner and the external world. It directs the flow of information from sensory organs to memory, as well as controlling top-down influences on perception. It also compares external sensations to internal top-down expectations. The model is tested in a number of computer simulations that illustrate how it can operate as a component in various cognitive tasks including perception, the A-not-B test, delayed matching to sample, episodic recall, and vicarious trial and error.

  6. The relationships between memory systems and sleep stages.

    Science.gov (United States)

    Rauchs, Géraldine; Desgranges, Béatrice; Foret, Jean; Eustache, Francis

    2005-06-01

    Sleep function remains elusive despite our rapidly increasing comprehension of the processes generating and maintaining the different sleep stages. Several lines of evidence support the hypothesis that sleep is involved in the off-line reprocessing of recently-acquired memories. In this review, we summarize the main results obtained in the field of sleep and memory consolidation in both animals and humans, and try to connect sleep stages with the different memory systems. To this end, we have collated data obtained using several methodological approaches, including electrophysiological recordings of neuronal ensembles, post-training modifications of sleep architecture, sleep deprivation and functional neuroimaging studies. Broadly speaking, all the various studies emphasize the fact that the four long-term memory systems (procedural memory, perceptual representation system, semantic and episodic memory, according to Tulving's SPI model; Tulving, 1995) benefit either from non-rapid eye movement (NREM) (not just SWS) or rapid eye movement (REM) sleep, or from both sleep stages. Tulving's classification of memory systems appears more pertinent than the declarative/non-declarative dichotomy when it comes to understanding the role of sleep in memory. Indeed, this model allows us to resolve several contradictions, notably the fact that episodic and semantic memory (the two memory systems encompassed in declarative memory) appear to rely on different sleep stages. Likewise, this model provides an explanation for why the acquisition of various types of skills (perceptual-motor, sensory-perceptual and cognitive skills) and priming effects, subserved by different brain structures but all designated by the generic term of implicit or non-declarative memory, may not benefit from the same sleep stages.

  7. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network, thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
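
    The redistribution at the heart of the method can be sketched with mpi4py: FFT the locally held dimension, exchange blocks all-to-all so each node acquires the other dimension, then FFT again. The sketch uses a plain MPI_Alltoall; the patent's specific contribution, sending the blocks in random order to spread network load, is omitted.

        # Run with e.g.: mpirun -n 4 python fft2d_sketch.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        p, rank = comm.Get_size(), comm.Get_rank()
        N = 4 * p                      # N x N array, N divisible by p (assumption)
        rows = cols = N // p           # each rank owns `rows` rows, then `cols` columns

        local = (np.arange(rows * N) + rank * rows * N).reshape(rows, N).astype(complex)
        local = np.fft.fft(local, axis=1)          # first 1-D FFT along owned rows

        # carve the row block into p column blocks and exchange them all-to-all
        send = np.ascontiguousarray(local.reshape(rows, p, cols).transpose(1, 0, 2))
        recv = np.empty_like(send)
        comm.Alltoall(send, recv)

        # stitch: recv[i] holds rank i's rows of this rank's column slice
        mine = np.concatenate(recv, axis=0).T      # (cols, N), transposed layout
        result = np.fft.fft(mine, axis=1)          # second 1-D FFT completes the 2-D FFT
        if rank == 0:
            print("local result block:", result.shape)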

  8. Factors that influence the relative use of multiple memory systems.

    Science.gov (United States)

    Packard, Mark G; Goodman, Jarid

    2013-11-01

    Neurobehavioral evidence supports the existence of at least two anatomically distinct "memory systems" in the mammalian brain that mediate dissociable types of learning and memory; a "cognitive" memory system dependent upon the hippocampus and a "stimulus-response/habit" memory system dependent upon the dorsolateral striatum. Several findings indicate that despite their anatomical and functional distinctiveness, hippocampal- and dorsolateral striatal-dependent memory systems may potentially interact and that, depending on the learning situation, this interaction may be cooperative or competitive. One approach to examining the neural mechanisms underlying these interactions is to consider how various factors influence the relative use of multiple memory systems. The present review examines several such factors, including information compatibility, temporal sequence of training, the visual sensory environment, reinforcement parameters, emotional arousal, and memory modulatory systems. Altering these parameters can lead to selective enhancements of either hippocampal-dependent or dorsolateral striatal-dependent memory, and bias animals toward the use of either cognitive or habit memory in dual-solution tasks that may be solved adequately with either memory system. In many learning situations, the influence of such experimental factors on the relative use of memory systems likely reflects a competitive interaction between the systems. Research examining how various factors influence the relative use of multiple memory systems may be a useful method for investigating how these systems interact with one another.

  9. Maze learning by a hybrid brain-computer system.

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-13

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision-making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components enable a novel hybrid brain-computer system, the ratbot, to exhibit superior learning abilities in a maze learning task, even when the rat's vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  10. Novel spintronics devices for memory and logic: prospects and challenges for room temperature all spin computing

    Science.gov (United States)

    Wang, Jian-Ping

    An energy-efficient memory and logic device for the post-CMOS era has been the goal of a variety of research fields. The limits of scaling, which we expect to reach by the year 2025, demand that future advances in computational power will not be realized from ever-shrinking device sizes, but rather by innovative designs and new materials and physics. Magnetoresistance-based devices have been promising candidates for future integrated magnetic computation because of their unique non-volatility and functionalities. The application of perpendicular magnetic anisotropy for potential STT-RAM applications was demonstrated and has since been intensively investigated by both academic and industry groups, but there is no clear pathway for how scaling will eventually work for both memory and logic applications. One of the main reasons is that there is no demonstrated material stack candidate that could lead to a scaling scheme down to sub-10 nm. Another challenge for the use of magnetoresistance-based devices for logic applications is the available switching speed and writing energy. Although good progress has been made in demonstrating fast switching of a thermally stable magnetic tunnel junction (MTJ) down to 165 ps, this is still several times slower than its CMOS counterpart. In this talk, I will review the recent progress by my research group and my C-SPIN colleagues, then discuss the opportunities, challenges, and some potential pathways for magnetoresistance-based devices for memory and logic applications, and their integration for a room temperature all-spin computing system.

  11. The Associative Memory system for the FTK processor at ATLAS

    CERN Document Server

    Cipriani, R; The ATLAS collaboration; Donati, S; Giannetti, P; Lanza, A; Luciano, P; Magalotti, D; Piendibene, M

    2013-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity, the accelerator backgrounds, and the luminosity increase, we need increasingly complex and exclusive selections. We present results and performances of a new prototype of the Associative Memory system, the core of the Fast Tracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment trigger upgrade. The AM system provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the “combinatorial challenge”, is beaten by the Associative Memory (AM) technology exploiting parallelism to the maximum level: it compares the event to pre-calculated “expectations” or “patterns” (pattern matching) at once, looking for candidate tracks called “roads”. The problem is solved by the time the data are loaded into the AM devices. We report on the tests of the integrate...

  14. Understanding Organizational Memory from the Integrated Management Systems (ERP)

    Directory of Open Access Journals (Sweden)

    Gilberto Perez

    2013-10-01

    With this research, in the form of a theoretical essay addressing the theme of Organizational Memory and Integrated Management Systems (ERP), we tried to present some evidence of how this type of system can contribute to the consolidation of certain features of Organizational Memory. From a theoretical review of the concepts of Human Memory, extending to Organizational Memory and Information Systems, with emphasis on Integrated Management Systems (ERP), we tried to draw a parallel between the functions and structures of Organizational Memory and the features and characteristics of ERPs. The ERP system was chosen for this study because of its complexity and broad scope. It was verified that ERPs adequately support many functions of Organizational Memory, highlighting the implementation of logical processes, practices, and rules in business. It is hoped that the dialogue presented here can contribute to advancing the understanding of Organizational Memory, since the analogy with Human Memory is a fertile field and there is still much to be researched.

  15. From shoebox to performative agent: the computer as personal memory machine

    NARCIS (Netherlands)

    van Dijck, J.

    2005-01-01

    Digital technologies offer new opportunities in the everyday lives of people: with still expanding memory capacities, the computer is rapidly becoming a giant storage and processing facility for recording and retrieving ‘bits of life’. Software engineers and companies promise not only to expand the

  16. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej; Paszyński, Maciej R.; Pardo, D.; Dalcin, Lisandro; Calo, Victor M.

    2015-01-01

    This paper derives theoretical estimates of the computational cost of an isogeometric multi-frontal direct solver executed on parallel distributed-memory machines. We show theoretically that for C^{p-1} global continuity of the isogeometric solution

  17. Building logical qubits in a superconducting quantum computing system

    Science.gov (United States)

    Gambetta, Jay M.; Chow, Jerry M.; Steffen, Matthias

    2017-01-01

    The technological world is in the midst of a quantum computing and quantum information revolution. Since Richard Feynman's famous `plenty of room at the bottom' lecture (Feynman, Engineering and Science 23, 22 (1960)), hinting at the notion of novel devices employing quantum mechanics, the quantum information community has taken gigantic strides in understanding the potential applications of a quantum computer and laid the foundational requirements for building one. We believe that the next significant step will be to demonstrate a quantum memory, in which a system of interacting qubits stores an encoded logical qubit state longer than the incorporated parts. Here, we describe the important route towards a logical memory with superconducting qubits, employing a rotated version of the surface code. The current status of technology with regards to interconnected superconducting-qubit networks will be described and near-term areas of focus to improve devices will be identified. Overall, the progress in this exciting field has been astounding, but we are at an important turning point, where it will be critical to incorporate engineering solutions with quantum architectural considerations, laying the foundation towards scalable fault-tolerant quantum computers in the near future.

  18. Bidirectional Frontoparietal Oscillatory Systems Support Working Memory.

    Science.gov (United States)

    Johnson, Elizabeth L; Dewar, Callum D; Solbakk, Anne-Kristin; Endestad, Tor; Meling, Torstein R; Knight, Robert T

    2017-06-19

    The ability to represent and select information in working memory provides the neurobiological infrastructure for human cognition. For 80 years, dominant views of working memory have focused on the key role of prefrontal cortex (PFC) [1-8]. However, more recent work has implicated posterior cortical regions [9-12], suggesting that PFC engagement during working memory is dependent on the degree of executive demand. We provide evidence from neurological patients with discrete PFC damage that challenges the dominant models attributing working memory to PFC-dependent systems. We show that neural oscillations, which provide a mechanism for PFC to communicate with posterior cortical regions [13], independently subserve communications both to and from PFC, uncovering parallel oscillatory mechanisms for working memory. Fourteen PFC patients and 20 healthy, age-matched controls performed a working memory task where they encoded, maintained, and actively processed information about pairs of common shapes. In controls, the electroencephalogram (EEG) exhibited oscillatory activity in the low-theta range over PFC and directional connectivity from PFC to parieto-occipital regions commensurate with executive processing demands. Concurrent alpha-beta oscillations were observed over parieto-occipital regions, with directional connectivity from parieto-occipital regions to PFC, regardless of processing demands. Accuracy, PFC low-theta activity, and PFC → parieto-occipital connectivity were attenuated in patients, revealing a PFC-independent, alpha-beta system. The PFC patients still demonstrated task proficiency, which indicates that the posterior alpha-beta system provides sufficient resources for working memory. Taken together, our findings reveal neurologically dissociable PFC and parieto-occipital systems and suggest that parallel, bidirectional oscillatory systems form the basis of working memory. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Development scenarios for organizational memory information systems

    NARCIS (Netherlands)

    Wijnhoven, Alphonsus B.J.M.

    1999-01-01

    Well-managed organizational memories have been emphasized in the recent management literature as important sources for business success. Organizational memory information systems (OMIS) have been conceptualized as a framework for information technologies to support these organizational memories.

  20. Wearable Intrinsically Soft, Stretchable, Flexible Devices for Memories and Computing.

    Science.gov (United States)

    Rajan, Krishna; Garofalo, Erik; Chiolerio, Alessandro

    2018-01-27

    A recent trend in the development of high-mass-consumption electronic devices is towards electronic textiles (e-textiles), smart wearable devices, smart clothes, and flexible or printable electronics. Intrinsically soft, stretchable, flexible Wearable Memories and Computing devices (WMCs) bring us closer to sci-fi scenarios, where future electronic systems are totally integrated in our everyday outfits and help us in achieving a higher comfort level, interacting for us with other digital devices such as smartphones and domotics, or with analog devices, such as our brain/peripheral nervous system. WMCs will enable each of us to contribute to open and big data systems as individual nodes, providing real-time information about physical and environmental parameters (including air pollution monitoring, sound and light pollution, chemical or radioactive fallout alert, network availability, and so on). Furthermore, WMCs could be directly connected to the human brain and enable extremely fast operation and unprecedented interface complexity, directly mapping the continuous states available to biological systems. This review focuses on recent advances in nanotechnology and materials science and pays particular attention to any result and promising technology that enables intrinsically soft, stretchable, flexible WMCs.

  1. A distributed-memory hierarchical solver for general sparse linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering

    2017-12-20

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
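
    The quoted computation-to-communication ratio is easy to sanity-check. The sketch below is a back-of-the-envelope model assuming a cubic subdomain of side n owned by each processor; it is an illustration of the scaling argument, not the authors' solver.

```python
# Back-of-the-envelope check of the quoted ratio for a cubic subdomain of side n:
# computation scales with the subdomain volume, communication with its surface.

def compute_to_comm_ratio(n):
    volume = n ** 3        # unknowns updated locally
    surface = 6 * n ** 2   # boundary data exchanged with neighbors
    return volume / surface

for n in (10, 100, 1000):
    print(n, compute_to_comm_ratio(n))  # grows like n/6 with the subdomain side
```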

  2. External-Memory Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Arge, Lars; Zeh, Norbert

    2010-01-01

    The data sets involved in many modern applications are often too massive to fit in main memory of even the most powerful computers and must therefore reside on disk. Thus communication between internal and external memory, and not actual computation time, becomes the bottleneck in the computation....... This is due to the huge difference in access time of fast internal memory and slower external memory such as disks. The goal of theoretical work in the area of external memory algorithms (also called I/O algorithms or out-of-core algorithms) has been to develop algorithms that minimize the Input...... in parallel and the use of parallel disks has received a lot of theoretical attention. See below for recent surveys of theoretical results in the area of I/O-efficient algorithms. TPIE is designed to bridge the gap between the theory and practice of parallel I/O systems. It is intended to demonstrate all...
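
    For a concrete feel of the I/O model, here is a minimal external merge sort sketch, the textbook example of an I/O-efficient algorithm. It is an illustration only: the runs are held in lists rather than streamed to disk, and the memory_size parameter is an assumed stand-in for the internal memory capacity M.

```python
# Minimal external merge sort sketch: pass 1 builds sorted runs that each fit in
# "memory", pass 2 performs a k-way merge; real implementations stream the runs
# to and from disk, which is modeled here with in-memory lists.
import heapq

def external_sort(items, memory_size):
    runs, buf = [], []
    for x in items:
        buf.append(x)
        if len(buf) == memory_size:       # memory full: flush a sorted run
            runs.append(sorted(buf))
            buf = []
    if buf:
        runs.append(sorted(buf))
    return list(heapq.merge(*runs))       # lazy k-way merge of all runs

print(external_sort([5, 2, 9, 1, 7, 3, 8], memory_size=3))  # [1, 2, 3, 5, 7, 8, 9]
```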

  3. NVL-C: Static Analysis Techniques for Efficient, Correct Programming of Non-Volatile Main Memory Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seyong [ORNL; Vetter, Jeffrey S [ORNL

    2016-01-01

    Computer architecture experts expect that non-volatile memory (NVM) hierarchies will play a more significant role in future systems including mobile, enterprise, and HPC architectures. With this expectation in mind, we present NVL-C: a novel programming system that facilitates the efficient and correct programming of NVM main memory systems. The NVL-C programming abstraction extends C with a small set of intuitive language features that target NVM main memory, and can be combined directly with traditional C memory model features for DRAM. We have designed these new features to enable compiler analyses and run-time checks that can improve performance and guard against a number of subtle programming errors, which, when left uncorrected, can corrupt NVM-stored data. Moreover, to enable recovery of data across application or system failures, these NVL-C features include a flexible directive for specifying NVM transactions. So that our implementation might be extended to other compiler front ends and languages, the majority of our compiler analyses are implemented in an extended version of LLVM's intermediate representation (LLVM IR). We evaluate NVL-C on a number of applications to show its flexibility, performance, and correctness.

  4. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first function, conversion from analogue to digital mode, is not within the control of the operator, but the second type of manipulation is. This type of manipulation should be done carefully, without sacrificing the integrity of the incoming information.

  5. Command vector memory systems: high performance at low cost

    OpenAIRE

    Corbal San Adrián, Jesús; Espasa Sans, Roger; Valero Cortés, Mateo

    1998-01-01

    The focus of this paper is on designing both a low cost and high performance, high bandwidth vector memory system that takes advantage of modern commodity SDRAM memory chips. To successfully extract the full bandwidth from SDRAM parts, we propose a new memory system organization based on sending commands to the memory system as opposed to sending individual addresses. A command specifies, in a few bytes, a request for multiple independent memory words. A command is similar to a burst found in...
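
    A rough sketch of the command idea is shown below, with an invented field layout (base, stride, length): one compact command expands into many independent word addresses at the memory system, instead of each address crossing the bus separately.

```python
# Invented field layout for a memory "command": a few bytes describing base,
# stride, and length expand at the memory system into many independent word
# addresses, instead of sending each address over the bus separately.
from dataclasses import dataclass

@dataclass
class VectorCommand:
    base: int     # starting word address
    stride: int   # distance between consecutive elements
    length: int   # number of words requested

    def expand(self):
        return [self.base + i * self.stride for i in range(self.length)]

cmd = VectorCommand(base=0x1000, stride=4, length=8)
print([hex(a) for a in cmd.expand()])  # eight word addresses from one command
```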

  6. Irrelevant sensory stimuli interfere with working memory storage: evidence from a computational model of prefrontal neurons.

    Science.gov (United States)

    Bancroft, Tyler D; Hockley, William E; Servos, Philip

    2013-03-01

    The encoding of irrelevant stimuli into the memory store has previously been suggested as a mechanism of interference in working memory (e.g., Lange & Oberauer, Memory, 13, 333-339, 2005; Nairne, Memory & Cognition, 18, 251-269, 1990). Recently, Bancroft and Servos (Experimental Brain Research, 208, 529-532, 2011) used a tactile working memory task to provide experimental evidence that irrelevant stimuli were, in fact, encoded into working memory. In the present study, we replicated Bancroft and Servos's experimental findings using a biologically based computational model of prefrontal neurons, providing a neurocomputational model of overwriting in working memory. Furthermore, our modeling results show that inhibition acts to protect the contents of working memory, and they suggest a need for further experimental research into the capacity of vibrotactile working memory.

  7. Forms of memory: Investigating the computational basis of semantic-episodic memory interactions

    NARCIS (Netherlands)

    Neville, D.A.

    2015-01-01

    The present thesis investigated how the memory systems related to the processing of semantic and episodic information combine to generate behavioural performance as measured in standard laboratory tasks. Across a series of behavioural experiment I looked at different types of interactions between

  8. Spectral decomposition of nonlinear systems with memory

    Science.gov (United States)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
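
    The Mittag-Leffler function mentioned above has a simple series definition, E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1), which reduces to exp(z) at alpha = 1. The truncated-series sketch below is for illustration only; production code would use a dedicated routine for large arguments.

```python
# Truncated-series sketch of the one-parameter Mittag-Leffler function,
# E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1); alpha = 1 recovers exp(z).
from math import gamma, exp

def mittag_leffler(alpha, z, terms=100):
    return sum(z ** k / gamma(alpha * k + 1) for k in range(terms))

print(mittag_leffler(1.0, -2.0), exp(-2.0))  # series agrees with exp for alpha = 1
print(mittag_leffler(0.5, -2.0))             # heavier-tailed relaxation for alpha < 1
```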

  9. Developing a personal computer based expert system for radionuclide identification

    International Nuclear Information System (INIS)

    Aarnio, P.A.; Hakulinen, T.T.

    1990-01-01

    Several expert system development tools are available for personal computers today. We have used one of the LISP-based high-end tools for nearly two years in developing an expert system for identification of gamma sources. The system contains a radionuclide database of 2055 nuclides and 48000 gamma transitions, with a knowledge base of about sixty rules. This application combines a LISP-based inference engine with database management and relatively heavy numerical calculations performed using the C language. The most important feature needed has been the ability to use LISP and C together with the more advanced object-oriented features of the development tool. The main difficulties have been long response times and the large amount (10-16 MB) of computer memory required.
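
    The abstract does not spell out the rule base, but the flavor of gamma-line identification can be sketched as a library lookup with an energy tolerance. The three-nuclide library and the tolerance below are illustrative stand-ins for the 2055-nuclide database described, not the authors' system.

```python
# Illustrative nuclide identification rule: score candidates by the fraction of
# their library gamma lines matched by measured peaks within a tolerance (keV).
# The tiny library here stands in for the real 2055-nuclide database.
LIBRARY = {
    "Co-60": [1173.2, 1332.5],
    "Cs-137": [661.7],
    "Na-22": [511.0, 1274.5],
}

def identify(peaks, tolerance=1.0):
    scores = {
        nuclide: sum(any(abs(p - line) <= tolerance for p in peaks)
                     for line in lines) / len(lines)
        for nuclide, lines in LIBRARY.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(identify([661.6, 1173.5, 1332.1]))  # Co-60 and Cs-137 fully matched
```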

  10. Understanding Organizational Memory from the Integrated Management Systems (ERP)

    OpenAIRE

    Gilberto Perez; Isabel Ramos

    2013-01-01

    With this research, in the form of a theoretical essay addressing the theme of Organizational Memory and Integrated Management Systems (ERP), we tried to present some evidence of how this type of system can contribute to the consolidation of certain features of Organizational Memory. From a theoretical review of the concepts of Human Memory, extending to the Organizational Memory and Information Systems, with emphasis on Integrated Management Systems (ERP) we tried to draw a parallel between ...

  11. Noise reduction in optically controlled quantum memory

    Science.gov (United States)

    Ma, Lijun; Slattery, Oliver; Tang, Xiao

    2018-05-01

    Quantum memory is an essential tool for quantum communications systems and quantum computers. An important category of quantum memory, called optically controlled quantum memory, uses a strong classical beam to control the storage and re-emission of a single-photon signal through an atomic ensemble. In this type of memory, the residual light from the strong classical control beam can cause severe noise and degrade the system performance significantly. Efficiently suppressing this noise is a requirement for the successful implementation of optically controlled quantum memories. In this paper, we briefly introduce the latest and most common approaches to quantum memory and review the various noise-reduction techniques used in implementing them.

  12. Holographic associative memories in document retrieval systems

    International Nuclear Information System (INIS)

    Becker, P.J.; Bolle, H.; Keller, A.; Kistner, W.; Riecke, W.D.; Wagner, U.

    1979-03-01

    The objective of this work was the implementation of a holographic memory with associative readout for a document retrieval system. Taking advantage of the favourable properties of holography (associative readout of the memory, parallel processing in the response store) may give shorter response times than sequentially organized data memories. Such a system may also operate in the interactive mode including chain associations. In order to avoid technological difficulties, the experimental setup made use of commercially available components only. As a result an improved holographic structure is proposed which uses volume holograms in photorefractive crystals as the storage device. In two appendix chapters we give a review of the state of the art of electrooptic devices for coherent optical data processing and of competing technologies (semiconductor associative memories and associative program systems). (orig.)

  13. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network, thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
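
    The transform structure in the claim can be mimicked on one process with NumPy: 1-D FFTs along the first dimension, a re-distribution (a transpose stands in for the network all-to-all), then 1-D FFTs along the second dimension reproduce the full 2-D FFT. This is a sketch of the decomposition only, not the patented communication scheme.

```python
# Single-process NumPy sketch of the decomposition: 1-D FFTs along the first
# dimension, a re-distribution (the transpose stands in for the network
# "all-to-all"), then 1-D FFTs along the second dimension give the 2-D FFT.
import numpy as np

a = np.random.rand(8, 8)

step1 = np.fft.fft(a, axis=0)              # 1-D FFTs over the locally held axis
redistributed = step1.T                    # models the all-to-all exchange
step2 = np.fft.fft(redistributed, axis=0)  # 1-D FFTs over the other axis

print(np.allclose(step2, np.fft.fft2(a).T))  # True: matches the library 2-D FFT
```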

  14. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    Science.gov (United States)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

    The unprecedented growth of climate data creates new opportunities for climate studies, yet big climate data pose a grand challenge for climatologists to efficiently manage and analyze. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high-performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index is built for the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebook). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multiple-dimensional, array-based datasets in various geoscience domains.
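
    The spatiotemporal index idea can be illustrated without Spark: a query first consults a small index of chunk bounding boxes and reads only intersecting chunks. The chunk keys and coordinate ranges below are invented for the example.

```python
# Invented chunk metadata: a query window is first tested against each chunk's
# bounding box, so only intersecting chunks are read and preprocessed.
CHUNK_INDEX = {
    "chunk-0": {"t": (0, 10), "lat": (0, 30)},
    "chunk-1": {"t": (0, 10), "lat": (30, 60)},
    "chunk-2": {"t": (10, 20), "lat": (0, 30)},
}

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def chunks_for_query(t_range, lat_range):
    return [cid for cid, bb in CHUNK_INDEX.items()
            if overlaps(bb["t"], t_range) and overlaps(bb["lat"], lat_range)]

print(chunks_for_query((5, 12), (10, 20)))  # ['chunk-0', 'chunk-2']
```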

  15. Stress Effects on Multiple Memory System Interactions

    OpenAIRE

    Ness, Deborah; Calabrese, Pasquale

    2016-01-01

    Extensive behavioural, pharmacological, and neurological research reports stress effects on mammalian memory processes. While stress effects on memory quantity have been known for decades, the influence of stress on multiple memory systems and their distinct contributions to the learning process have only recently been described. In this paper, after summarizing the fundamental biological aspects of stress/emotional arousal and recapitulating functionally and anatomically distinct memory syst...

  16. Generation-based memory synchronization in a multiprocessor system with weakly consistent memory accesses

    Energy Technology Data Exchange (ETDEWEB)

    Ohmacht, Martin

    2017-08-15

    In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.
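
    A toy software model of the claimed mechanism, with invented method names and no concurrency, captures the invariant: a sync issued at generation g completes only when no requests tagged with generations up to g remain in flight.

```python
# Abstract toy model of generation-based synchronization: memory requests are
# tagged with the current generation, and a sync for generation g completes only
# when no tagged requests from generations <= g remain in flight.
class GenerationTracker:
    def __init__(self):
        self.generation = 0
        self.in_flight = {}            # generation -> outstanding request count

    def issue_request(self):
        g = self.generation
        self.in_flight[g] = self.in_flight.get(g, 0) + 1
        return g

    def complete_request(self, g):
        self.in_flight[g] -= 1
        if self.in_flight[g] == 0:
            del self.in_flight[g]      # the "reclaim pointer" may advance past g

    def start_sync(self):
        self.generation += 1           # later requests belong to a new generation
        return self.generation - 1

    def sync_done(self, g):
        return all(gen > g for gen in self.in_flight)

t = GenerationTracker()
r = t.issue_request()
g = t.start_sync()
print(t.sync_done(g))   # False: one generation-0 request still in flight
t.complete_request(r)
print(t.sync_done(g))   # True
```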

  17. Generation-based memory synchronization in a multiprocessor system with weakly consistent memory accesses

    Science.gov (United States)

    Ohmacht, Martin

    2014-09-09

    In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.

  18. Embedded System Synthesis under Memory Constraints

    DEFF Research Database (Denmark)

    Madsen, Jan; Bjørn-Jørgensen, Peter

    1999-01-01

    This paper presents a genetic algorithm to solve the system synthesis problem of mapping a time-constrained single-rate system specification onto a given heterogeneous architecture which may contain irregular interconnection structures. The synthesis is performed under memory constraints; that is, the algorithm takes into account the memory size of processors and the size of interface buffers of communication links, and in particular the complicated interplay of these. The presented algorithm is implemented as part of the LYCOS cosynthesis system.

  19. The Sensitivity of Memory Consolidation and Reconsolidation to Inhibitors of Protein Synthesis and Kinases: Computational Analysis

    Science.gov (United States)

    Zhang, Yili; Smolen, Paul; Baxter, Douglas A.; Byrne, John H.

    2010-01-01

    Memory consolidation and reconsolidation require kinase activation and protein synthesis. Blocking either process during or shortly after training or recall disrupts memory stabilization, which suggests the existence of a critical time window during which these processes are necessary. Using a computational model of kinase synthesis and…

  20. The ACP [Advanced Computer Program] multiprocessor system at Fermilab

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost-effective for many high energy physics problems. The system is based on single-board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100. Both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing “nodes” sit are connected via a high speed “Branch Bus” to one or more MicroVAX computers which act as hosts handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  1. Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.

    Science.gov (United States)

    Dworak, Markus; Schierl, Thomas; Bruns, Thomas; Strüder, Heiko Klaus

    2007-11-01

    Television and computer game consumption are powerful influences in the lives of most children. Previous evidence has supported the notion that media exposure could impair a variety of behavioral characteristics. Excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the effects of singular excessive media consumption on sleep patterns and the linked implications for children. The aim of this study was to investigate the effects of singular excessive television and computer game consumption on sleep patterns and memory performance of children. Eleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance. Only computer game playing resulted in significantly reduced amounts of slow-wave sleep as well as significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after previous computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns. The results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of the negative influence of media consumption on children's sleep, learning, and memory.

  2. A QDWH-Based SVD Software Framework on Distributed-Memory Manycore Systems

    KAUST Repository

    Sukkari, Dalal

    2017-01-01

    This paper presents a high performance software framework for computing a dense SVD on distributed-memory manycore systems. Originally introduced by Nakatsukasa et al. (Nakatsukasa et al. 2010; Nakatsukasa and Higham 2013), the SVD solver relies on the polar decomposition using the QR Dynamically-Weighted Halley algorithm (QDWH). Although the QDWH-based SVD algorithm performs a significant amount of extra floating-point operations compared to the traditional SVD with the one-stage bidiagonal reduction, the inherent high level of concurrency associated with Level 3 BLAS compute-bound kernels ultimately compensates for the arithmetic complexity overhead. Using the ScaLAPACK two-dimensional block cyclic data distribution with a rectangular processor topology, the resulting QDWH-SVD further reduces excessive communications during the panel factorization, while increasing the degree of parallelism during the update of the trailing submatrix, as opposed to relying on the default square processor grid. After detailing the algorithmic complexity and the memory footprint of the algorithm, we conduct a thorough performance analysis and study the impact of the grid topology on the performance by looking at the communication and computation profiling trade-offs. We report performance results against state-of-the-art existing QDWH software implementations (e.g., Elemental) and their SVD extensions on large-scale distributed-memory manycore systems based on commodity Intel x86 Haswell processors and the Knights Landing (KNL) architecture. The QDWH-SVD framework achieves speedups of up to 3-fold and 8-fold on the Haswell- and KNL-based platforms, respectively, against ScaLAPACK PDGESVD, and turns out to be a competitive alternative for well- and ill-conditioned matrices. We finally come up with a performance model based on these empirical results. Our QDWH-based polar decomposition and its SVD extension are freely available at https://github.com/ecrc/qdwh.git and https
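
    For intuition, the iteration underlying QDWH can be sketched with its weights frozen at (a, b, c) = (3, 1, 3), which is the plain cubically convergent Halley iteration for the polar factor. The real QDWH solver computes the weights dynamically from singular-value bounds and applies each step through QR factorizations for numerical stability; the explicit inverse below is for illustration only.

```python
# Sketch of the polar-factor iteration behind QDWH with the dynamic weights
# frozen at (a, b, c) = (3, 1, 3), i.e., the plain cubically convergent Halley
# iteration; production QDWH derives the weights from singular-value bounds and
# applies each step via QR factorizations instead of an explicit inverse.
import numpy as np

def polar_unitary(A, iters=20):
    X = A / np.linalg.norm(A, 2)            # scale so all singular values are <= 1
    I = np.eye(A.shape[1])
    for _ in range(iters):
        G = X.conj().T @ X
        X = X @ (3 * I + G) @ np.linalg.inv(I + 3 * G)
    return X

A = np.random.rand(5, 5)
U = polar_unitary(A)                         # unitary polar factor of A = U H
H = U.conj().T @ A                           # H converges to the symmetric factor
print(np.allclose(U.conj().T @ U, np.eye(5)), np.allclose(H, H.conj().T))
```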

  3. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.

    2016-08-08

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. We investigate the latter scenario, which performs more floating-point operations but exposes at the same time more parallelism, and therefore, runs closer to the theoretical peak performance of the system, thanks to more compute-bound matrix operations. Profiling results show the performance scalability of QDWH for calculating the polar decomposition using around 9200 MPI processes on well and ill-conditioned matrices of 100K×100K problem size. We study then the performance impact of the QDWH-based polar decomposition as a pre-processing step toward calculating the SVD itself. The new distributed-memory implementation of the QDWH-SVD solver achieves up to five-fold speedup against current state-of-the-art vendor SVD implementations. © Springer International Publishing Switzerland 2016.

  4. Architectural Techniques to Enable Reliable and Scalable Memory Systems

    OpenAIRE

    Nair, Prashant J.

    2017-01-01

    High capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even bui...

  5. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    Science.gov (United States)

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.

  6. Megachannel γ--γ coincidence system using a PDP-8/E computer and moving-head disks

    International Nuclear Information System (INIS)

    Ruhter, W.D.; Camp, D.C.; Mann, L.G.; Niday, J.B.; Siemens, P.D.

    1976-01-01

    A megachannel pulse-height analysis system using a PDP-8/E computer and two moving-head disk memories has been developed. The system has a storage capacity of 2^20 memory locations, is capable of processing 1100 events/s, and provides on-line sorting and disk storage. An X- or Y-pulse-height spectrum in coincidence with one or several arbitrary pulse-height windows can be assembled in core for scope display and spectral analysis within 2 to 20 seconds. Reconstruction of a complete X- or Y-pulse-height spectrum requires about 3 minutes.

  7. Memory architecture for efficient utilization of SDRAM: a case study of the computation/memory access trade-off

    DEFF Research Database (Denmark)

    Gleerup, Thomas Møller; Holten-Lund, Hans Erik; Madsen, Jan

    2000-01-01

    This paper discusses the trade-off between calculations and memory accesses in a 3D graphics tile renderer for visualization of data from medical scanners. The performance requirement of this application is a frame rate of 25 frames per second when rendering 3D models with 2 million triangles. In software, forward differencing is usually better, but in this hardware implementation, the trade-off has made it possible to develop a very regular memory architecture with a buffering system, which can reach 95% bandwidth utilization using off-the-shelf SDRAM. This is achieved by changing the algorithm to use a memory access strategy with write-only and read-only phases, and a buffering system which uses round-robin bank write-access combined with burst read-access.

  8. Impulse: Memory System Support for Scientific Applications

    Directory of Open Access Journals (Sweden)

    John B. Carter

    1999-01-01

    Impulse is a new memory system architecture that adds two important features to a traditional memory controller. First, Impulse supports application-specific optimizations through configurable physical address remapping. By remapping physical addresses, applications control how their data is accessed and cached, improving their cache and bus utilization. Second, Impulse supports prefetching at the memory controller, which can hide much of the latency of DRAM accesses. Because it requires no modification to processor, cache, or bus designs, Impulse can be adopted in conventional systems. In this paper we describe the design of the Impulse architecture, and show how an Impulse memory system can improve the performance of memory-bound scientific applications. For instance, Impulse decreases the running time of the NAS conjugate gradient benchmark by 67%. We expect that Impulse will also benefit regularly strided, memory-bound applications of commercial importance, such as database and multimedia programs.
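
    The remapping feature can be imitated in a toy model: a translation function presents a strided walk through physical memory as a dense shadow array, so each fetched cache line is full of useful words. The layout below is invented and is not the Impulse controller interface.

```python
# Toy model of controller-side remapping: a translation function presents a
# strided walk through "physical memory" as a dense shadow array, so each cache
# line fetched through the shadow region is full of useful words.
memory = list(range(64))                    # stand-in for physical memory words

def strided_remap(base, stride):
    """Shadow index -> physical address, configured once at the controller."""
    return lambda i: base + i * stride

remap = strided_remap(base=0, stride=8)
dense_view = [memory[remap(i)] for i in range(8)]
print(dense_view)  # [0, 8, 16, 24, 32, 40, 48, 56] gathered into one dense array
```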

  9. Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…

  10. Directions for memory hierarchies and their components: research and development

    International Nuclear Information System (INIS)

    Smith, A.J.

    1978-10-01

    The memory hierarchy is usually the largest identifiable part of a computer system and making effective use of it is critical to the operation and use of the system. The levels of such a memory hierarchy are considered and the state of the art and likely directions for both research and development are described. Algorithmic and logical features of the hierarchy not directly associated with specific components are also discussed. Among the problems believed to be the most significant are the following: (a) evaluate the effectiveness of gap filler technology as a level of storage between main memory and disk, and if it proves to be effective, determine how/where it should be used, (b) develop algorithms for the use of mass storage in a large computer system, and (c) determine how cache memories should be implemented in very large, fast multiprocessor systems

  11. Memory and selective attention in multiple sclerosis: cross-sectional computer-based assessment in a large outpatient sample.

    Science.gov (United States)

    Adler, Georg; Lembach, Yvonne

    2015-08-01

    Cognitive impairments may have a severe impact on everyday functioning and quality of life of patients with multiple sclerosis (MS). However, there are some methodological problems in the assessment, and only a few studies allow a representative estimate of the prevalence and severity of cognitive impairments in MS patients. We applied a computer-based method, the memory and attention test (MAT), in 531 outpatients with MS, who were assessed at nine neurological practices or specialized outpatient clinics. The findings were compared with those obtained in an age-, sex- and education-matched control group of 84 healthy subjects. Episodic short-term memory was substantially decreased in the MS patients. About 20% of them reached a score more than two standard deviations below the mean of the control group. The episodic short-term memory score was negatively correlated with the EDSS score. Smaller but still significant impairments in the MS patients were found for verbal short-term memory, episodic working memory and selective attention. The computer-based MAT was found to be useful for a routine assessment of cognition in MS outpatients.

  12. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  13. The MNESIS model: Memory systems and processes, identity and future thinking.

    Science.gov (United States)

    Eustache, Francis; Viard, Armelle; Desgranges, Béatrice

    2016-07-01

    The Memory NEo-Structural Inter-Systemic model (MNESIS; Eustache and Desgranges, Neuropsychology Review, 2008) is a macromodel based on neuropsychological data which presents an interactive construction of memory systems and processes. Largely inspired by Tulving's SPI model, MNESIS puts the emphasis on the existence of different memory systems in humans and their reciprocal relations, adding new aspects, such as the episodic buffer proposed by Baddeley. The more integrative comprehension of brain dynamics offered by neuroimaging has contributed to rethinking the existence of memory systems. In the present article, we will argue that understanding the concept of memory by dividing it into systems at the functional level is still valid, but needs to be considered in the light of brain imaging. Here, we reinstate the importance of this division in different memory systems and illustrate, with neuroimaging findings, the links that operate between memory systems in response to task demands that constrain the brain dynamics. During a cognitive task, these memory systems interact transiently to rapidly assemble representations and mobilize functions to propose a flexible and adaptative response. We will concentrate on two memory systems, episodic and semantic memory, and their links with autobiographical memory. More precisely, we will focus on interactions between episodic and semantic memory systems in support of 1) self-identity in healthy aging and in brain pathologies and 2) the concept of the prospective brain during future projection. In conclusion, this MNESIS global framework may help to get a general representation of human memory and its brain implementation with its specific components which are in constant interaction during cognitive processes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. A Time-predictable Memory Network-on-Chip

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Chong, David VH; Puffitsch, Wolfgang

    2014-01-01

    To derive safe bounds on worst-case execution times (WCETs), all components of a computer system need to be time-predictable: the processor pipeline, the caches, the memory controller, and memory arbitration on a multicore processor. This paper presents a solution for time-predictable memory arbitration and access for chip-multiprocessors. The memory network-on-chip is organized as a tree with time-division multiplexing (TDM) of accesses to the shared memory. The TDM based arbitration completely decouples processor cores and allows WCET analysis of the memory accesses on individual cores without...
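
    The WCET argument for TDM arbitration is short enough to model directly: with a fixed slot table, the worst-case wait for any core's access is bounded by one TDM round, independent of what the other cores do. The slot table and cycle count below are assumed figures, not values from the paper.

```python
# Minimal model of TDM memory arbitration: each core owns a fixed slot in a
# repeating schedule, so the worst-case wait for any access is bounded by one
# TDM round regardless of what the other cores do.
SLOT_TABLE = [0, 1, 2, 3]            # core id served in each slot, repeating
SLOT_CYCLES = 10                     # cycles per memory slot (assumed figure)

def worst_case_wait(core, request_time):
    period = len(SLOT_TABLE) * SLOT_CYCLES
    slot_start = SLOT_TABLE.index(core) * SLOT_CYCLES
    return (slot_start - request_time) % period   # never exceeds period - 1

print(max(worst_case_wait(2, t) for t in range(40)))  # 39: bounded by one round
```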

  15. Memory Systems Do Not Divide on Consciousness: Reinterpreting Memory in Terms of Activation and Binding

    Science.gov (United States)

    Reder, Lynne M.; Park, Heekyeong; Kieffaber, Paul D.

    2009-01-01

    There is a popular hypothesis that performance on implicit and explicit memory tasks reflects 2 distinct memory systems. Explicit memory is said to store those experiences that can be consciously recollected, and implicit memory is said to store experiences and affect subsequent behavior but to be unavailable to conscious awareness. Although this…

  16. Design of SMART alarm system using main memory database

    International Nuclear Information System (INIS)

    Jang, Kue Sook; Seo, Yong Seok; Park, Keun Oak; Lee, Jong Bok; Kim, Dong Hoon

    2001-01-01

    To achieve the design goals of the SMART alarm system, we first have to decide how to handle and manage alarm information and how to use the database. This paper analyses the concepts and deficiencies of main memory databases applied in real-time systems, sets up the structure and processing principles of a main memory database using nonvolatile memory such as flash memory, and develops a recovery strategy and process board structures using these. This paper therefore shows that the design of the SMART alarm system is suited to the required functions and requirements.

  17. The effects of working memory on brain-computer interface performance.

    Science.gov (United States)

    Sprague, Samantha A; McBee, Matthew T; Sellers, Eric W

    2016-02-01

    The purpose of the present study is to evaluate the relationship between working memory and BCI performance. Participants took part in two separate sessions. The first session consisted of three computerized tasks. The List Sorting Working Memory Task was used to measure working memory, the Picture Vocabulary Test was used to measure general intelligence, and the Dimensional Change Card Sort Test was used to measure executive function, specifically cognitive flexibility. The second session consisted of a P300-based BCI copy-spelling task. The results indicate that both working memory and general intelligence are significant predictors of BCI performance. This suggests that working memory training could be used to improve performance on a BCI task. Working memory training may help to reduce a portion of the individual differences that exist in BCI performance allowing for a wider range of users to successfully operate the BCI system as well as increase the BCI performance of current users. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Investigation of fast initialization of spacecraft bubble memory systems

    Science.gov (United States)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1984-01-01

    Bubble domain technology offers significant improvement in reliability and functionality for spacecraft onboard memory applications. In considering potential memory systems organizations, minimization of power in high capacity bubble memory systems necessitates the activation of only the desired portions of the memory. In power strobing arbitrary memory segments, a capability of fast turn on is required. Bubble device architectures, which provide redundant loop coding in the bubble devices, limit the initialization speed. Alternate initialization techniques are investigated to overcome this design limitation. An initialization technique using a small amount of external storage is demonstrated.

  19. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost-effective for many high energy physics problems. The system is based on single-board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100. Both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing “nodes” sit are connected via a high speed “Branch Bus” to one or more MicroVAX computers which act as hosts handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  20. Design issues for block-oriented reflective memory system

    Energy Technology Data Exchange (ETDEWEB)

    Jovanovic, M; Tomasevic, M; Milutinovic, V

    1996-12-31

    The block-oriented reflective memory (BORM) system represents a modular bus-based system architecture that belongs to the class of distributed shared memory systems. The results of the evaluation study of the BORM implementation strategies and design decisions in regard to the different values of input parameters are presented. 5 refs.

  1. Robust dynamical decoupling for quantum computing and quantum memory.

    Science.gov (United States)

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.

  2. Systemic Lisbon Battery: Normative Data for Memory and Attention Assessments.

    Science.gov (United States)

    Gamito, Pedro; Morais, Diogo; Oliveira, Jorge; Ferreira Lopes, Paulo; Picareli, Luís Felipe; Matias, Marcelo; Correia, Sara; Brito, Rodrigo

    2016-05-04

    Memory and attention are two cognitive domains pivotal for the performance of instrumental activities of daily living (IADLs). The assessment of these functions is still widely carried out with pencil-and-paper tests, which lack ecological validity. The evaluation of cognitive and memory functions while the patients are performing IADLs should contribute to the ecological validity of the evaluation process. The objective of this study is to establish normative data from virtual reality (VR) IADLs designed to activate memory and attention functions. A total of 243 non-clinical participants carried out a paper-and-pencil Mini-Mental State Examination (MMSE) and performed 3 VR activities: art gallery visual matching task, supermarket shopping task, and memory fruit matching game. The data (execution time and errors, and money spent in the case of the supermarket activity) was automatically generated from the app. Outcomes were computed using non-parametric statistics, due to non-normality of distributions. Age, academic qualifications, and computer experience all had significant effects on most measures. Normative values for different levels of these measures were defined. Age, academic qualifications, and computer experience should be taken into account while using our VR-based platform for cognitive assessment purposes. ©Pedro Gamito, Diogo Morais, Jorge Oliveira, Paulo Ferreira Lopes, Luís Felipe Picareli, Marcelo Matias, Sara Correia, Rodrigo Brito. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 04.05.2016.

  3. Age effects on explicit and implicit memory

    Directory of Open Access Journals (Sweden)

    Emma Ward

    2013-09-01

    It is well documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favours the single-system view. Implications for the memory systems debate are discussed.

  4. Age effects on explicit and implicit memory.

    Science.gov (United States)

    Ward, Emma V; Berry, Christopher J; Shanks, David R

    2013-01-01

    It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed.

  5. Computer-based data acquisition system in the Large Coil Test Facility

    International Nuclear Information System (INIS)

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system.

  6. Flash memory management system and method utilizing multiple block list windows

    Science.gov (United States)

    Chow, James (Inventor); Gender, Thomas K. (Inventor)

    2005-01-01

    The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, and a bad block detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low use blocks for writing. The disk maintenance mechanism provides for the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
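
    The free block mechanism described above lends itself to a simple illustration: if each free block carries an erase count, keeping the free list in a min-heap keyed by that count hands out low-use blocks first. The sketch below (with hypothetical names such as FreeBlockPool and release/allocate) is our own illustration of that idea, not the patented implementation.

    ```python
    import heapq

    class FreeBlockPool:
        """Illustrative free-block mechanism: free blocks sit in a
        min-heap keyed by erase count, so low-use blocks are selected
        for writing first (simple wear leveling)."""

        def __init__(self):
            self._heap = []  # entries are (erase_count, block_id)

        def release(self, block_id, erase_count):
            # A block rejoins the pool once its erase has completed.
            heapq.heappush(self._heap, (erase_count, block_id))

        def allocate(self):
            # Hand out the least-worn free block for the next write.
            if not self._heap:
                raise RuntimeError("no free blocks; run disk maintenance")
            erase_count, block_id = heapq.heappop(self._heap)
            return block_id, erase_count

    pool = FreeBlockPool()
    for block_id, wear in [(0, 12), (1, 3), (2, 7)]:
        pool.release(block_id, wear)
    print(pool.allocate())  # -> (1, 3): the least-worn block is chosen
    ```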

  7. Scalable unit commitment by memory-bounded ant colony optimization with A{sup *} local search

    Energy Technology Data Exchange (ETDEWEB)

    Saber, Ahmed Yousuf; Alshareef, Abdulaziz Mohammed [Department of Electrical and Computer Engineering, King Abdulaziz University, P.O. Box 80204, Jeddah 21589 (Saudi Arabia)

    2008-07-15

    Ant colony optimization (ACO) has been successfully applied to many optimization problems, and the performance of the basic ACO is satisfactory for small problems with moderate dimension and searching space. As the searching space grows exponentially in the large-scale unit commitment problem, however, the basic ACO cannot handle the vast pheromone matrix within practical time and physical computer-memory limits. Memory-bounded methods prune the least-promising nodes to fit the system in computer memory. Therefore, the authors propose memory-bounded ant colony optimization (MACO) in this paper for the scalable (no restriction on system size) unit commitment problem. MACO works within the limitation of computer memory and does not permit the system to grow beyond a bound on memory. In the memory-bounded ACO implementation, an A{sup *} heuristic is introduced to increase local searching ability, and a probabilistic nearest neighbor method is applied to estimate pheromone intensity for forgotten values. Finally, benchmark data sets and existing methods are used to show the effectiveness of the proposed method. (author)
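
    The memory-bounding idea in the abstract can be sketched compactly: cap the pheromone table, prune the least-promising entry when the cap is exceeded, and estimate forgotten values from surviving neighbors. The Python sketch below is a loose illustration under those assumptions; class and parameter names are hypothetical, and it omits the A{sup *} local search and the unit-commitment encoding.

    ```python
    class BoundedPheromone:
        """Sketch of a memory-bounded pheromone table (hypothetical names):
        it never grows beyond `capacity` entries; when full, the
        least-promising entry is pruned, and forgotten values are later
        re-estimated from surviving entries that share the same state
        (a crude stand-in for the probabilistic nearest neighbor method)."""

        def __init__(self, capacity, default=1.0):
            self.capacity = capacity
            self.default = default
            self.tau = {}  # (state, decision) -> pheromone intensity

        def deposit(self, key, amount):
            self.tau[key] = self.tau.get(key, self.default) + amount
            if len(self.tau) > self.capacity:
                # Prune the least-promising node to stay within the bound.
                worst = min(self.tau, key=self.tau.get)
                del self.tau[worst]

        def intensity(self, key):
            if key in self.tau:
                return self.tau[key]
            # Forgotten value: estimate it from neighbors in the same state.
            state, _ = key
            neighbors = [v for (s, _), v in self.tau.items() if s == state]
            return sum(neighbors) / len(neighbors) if neighbors else self.default
    ```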

  8. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.

  9. A computational model of fMRI activity in the intraparietal sulcus that supports visual working memory.

    Science.gov (United States)

    Domijan, Dražen

    2011-12-01

    A computational model was developed to explain a pattern of results of fMRI activation in the intraparietal sulcus (IPS) supporting visual working memory for multiobject scenes. The model is based on the hypothesis that dendrites of excitatory neurons are major computational elements in the cortical circuit. Dendrites enable formation of a competitive queue that exhibits a gradient of activity values for nodes encoding different objects, and this pattern is stored in working memory. In the model, brain imaging data are interpreted as a consequence of blood flow arising from dendritic processing. Computer simulations showed that the model successfully simulates data showing the involvement of inferior IPS in object individuation and spatial grouping through representation of objects' locations in space, along with the involvement of superior IPS in object identification through representation of a set of objects' features. The model exhibits a capacity limit due to the limited dynamic range for nodes and the operation of lateral inhibition among them. The capacity limit is fixed in the inferior IPS regardless of the objects' complexity, due to the normalization of lateral inhibition, and variable in the superior IPS, due to the different encoding demands for simple and complex shapes. Systematic variation in the strength of self-excitation enables an understanding of the individual differences in working memory capacity. The model offers several testable predictions regarding the neural basis of visual working memory.

  10. SODR Memory Control Buffer Control ASIC

    Science.gov (United States)

    Hodson, Robert F.

    1994-01-01

    The Spacecraft Optical Disk Recorder (SODR) is a state-of-the-art mass storage system for future NASA missions requiring high transmission rates and a large-capacity storage system. This report covers the design and development of an SODR memory buffer control application-specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times, (2) converting data formats from a high performance parallel interface format to a small computer systems interface format. Ten 144-pin, 50 MHz CMOS ASICs were designed, fabricated and tested to implement the memory buffer control function.

  11. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej

    2015-02-01

    This paper derives theoretical estimates of the computational cost for an isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for C^(p-1) global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order O(log(N) p^2) for the one dimensional (1D) case, O(N p^2) for the two dimensional (2D) case, and O(N^(4/3) p^2) for the three dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX and SuperLU, available through the PETIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates both in terms of p and N. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, becoming about 20% for 256 processors for a 3D example with 128^3 unknowns and linear B-splines with C^0 global continuity, and 15% for a 3D example with 64^3 unknowns and quartic B-splines with C^3 global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher order continuity spaces is large, quickly consuming all the available memory resources even in the parallel distributed memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving higher order continuity spaces, although the number of processors that one can efficiently employ is somewhat limited.
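
    The quoted complexity estimates are easy to turn into a back-of-the-envelope calculator. The snippet below is our own illustration, not the paper's code; constants are dropped, so only ratios between cases are meaningful.

    ```python
    from math import log2

    def solver_cost(N, p, dim):
        """Order-of-magnitude cost from the estimates quoted above:
        O(log(N) p^2) in 1D, O(N p^2) in 2D, O(N^(4/3) p^2) in 3D."""
        if dim == 1:
            return log2(N) * p**2
        if dim == 2:
            return N * p**2
        if dim == 3:
            return N**(4 / 3) * p**2
        raise ValueError("dim must be 1, 2, or 3")

    # Doubling p quadruples the estimated cost at a fixed N, in any dimension:
    print(solver_cost(128**3, 2, 3) / solver_cost(128**3, 1, 3))  # -> 4.0
    ```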

  12. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    Full Text Available One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) scheme is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
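
    The contracting rule at the heart of ACWN can be caricatured in a few lines: a new process stays local unless the local load exceeds the neighborhood minimum by some threshold, in which case it is shipped to the least-loaded neighbor. The sketch below is a simplified reading of that idea; the function name and threshold are hypothetical, not taken from the paper.

    ```python
    def acwn_place(my_id, my_load, neighbor_loads, threshold=2):
        """Caricature of adaptive contracting within a neighborhood: a newly
        created process stays local unless this processor is overloaded
        relative to its neighborhood, in which case the work contracts
        toward the least-loaded neighbor. Names and threshold are made up."""
        least_id = min(neighbor_loads, key=neighbor_loads.get)
        if my_load > neighbor_loads[least_id] + threshold:
            return least_id   # ship the new process to the lightest neighbor
        return my_id          # otherwise keep it local

    # Processor 0 (load 9) with neighbors 1 (load 3) and 2 (load 8):
    print(acwn_place(0, 9, {1: 3, 2: 8}))  # -> 1
    ```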

  13. Polymorphous computing fabric

    Science.gov (United States)

    Wolinski, Christophe Czeslaw [Los Alamos, NM; Gokhale, Maya B [Los Alamos, NM; McCabe, Kevin Peter [Los Alamos, NM

    2011-01-18

    Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

  14. Demonstration of holographic smart card system using the optical memory technology

    Science.gov (United States)

    Kim, JungHoi; Choi, JaeKwang; An, JunWon; Kim, Nam; Lee, KwonYeon; Jeon, SeckHee

    2003-05-01

    In this paper, we demonstrate a holographic smart card system using a digital holographic memory technique, in which the reference beam is encrypted by a random phase mask to prevent unauthorized users from accessing the stored digital page. The input data, which include document data, a picture of a face, and a fingerprint for identification, are encoded digitally and then coupled with the reference beam modulated by the random phase mask. The proposed system can therefore record data on the order of MB to GB and read out all personal information from just one card without any additional database system. Also, recorded digital holograms cannot be reconstructed without the phase key and cannot be copied by using computers, scanners, or photography.

  15. A computer vision-based automated Figure-8 maze for working memory test in rodents.

    Science.gov (United States)

    Pedigo, Samuel F; Song, Eun Young; Jung, Min Whan; Kim, Jeansok J

    2006-09-30

    The benchmark test for prefrontal cortex (PFC)-mediated working memory in rodents is a delayed alternation task utilizing variations of T-maze or Figure-8 maze, which requires the animals to make specific arm entry responses for reward. In this task, however, manual procedures involved in shaping target behavior, imposing delays between trials and delivering rewards can potentially influence the animal's performance on the maze. Here, we report an automated Figure-8 maze which does not necessitate experimenter-subject interaction during shaping, training or testing. This system incorporates a computer vision system for tracking, motorized gates to impose delays, and automated reward delivery. The maze is controlled by custom software that records the animal's location and activates the gates according to the animal's behavior and a control algorithm. The program performs calculations of task accuracy, tracks movement sequence through the maze, and provides other dependent variables (such as running speed, time spent in different maze locations, activity level during delay). Testing in rats indicates that the performance accuracy is inversely proportional to the delay interval, decreases with PFC lesions, and that animals anticipate timing during long delays. Thus, our automated Figure-8 maze is effective at assessing working memory and provides novel behavioral measures in rodents.

  16. Noise-assisted morphing of memory and logic function

    International Nuclear Information System (INIS)

    Kohar, Vivek; Sinha, Sudeshna

    2012-01-01

    We demonstrate how noise allows a bistable system to behave as a memory device, as well as a logic gate. Namely, in some optimal range of noise, the system can operate flexibly, both as a NAND/AND gate and a Set–Reset latch, by varying an asymmetrizing bias. Thus we show how this system implements memory, even for sub-threshold input signals, using noise constructively to store information. This can lead to the development of reconfigurable devices, that can switch efficiently between memory tasks and logic operations. -- Highlights: ► We consider a nonlinear system in a noisy environment. ► We show that the system can function as a robust memory element. ► Further, the response of the system can be easily morphed from memory to logic operations. ► Such systems can potentially act as building blocks of “smart” computing devices.
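
    The morphing behavior described above can be reproduced qualitatively with a toy Langevin simulation of a driven double-well system: the two logic inputs and an asymmetrizing bias tilt the potential, noise lets the state settle into the preferred well, and the occupied well is read as the output. The sketch below is our own illustration; all parameter values are guesses, not the authors' settings.

    ```python
    import numpy as np

    def well_occupancy(i1, i2, bias=-0.2, noise_sd=0.4, dt=0.01,
                       steps=40000, seed=1):
        """Toy Langevin model of a noisy bistable element,
        x' = x - x^3 + I1 + I2 + bias + noise, with logic inputs mapped
        to +/-0.3. The time-averaged state indicates the preferred well.
        All parameter values are illustrative guesses."""
        rng = np.random.default_rng(seed)
        forcing = (0.3 if i1 else -0.3) + (0.3 if i2 else -0.3) + bias
        x, acc = 0.0, 0.0
        for _ in range(steps):
            x += ((x - x**3 + forcing) * dt
                  + noise_sd * np.sqrt(dt) * rng.standard_normal())
            acc += x
        return acc / steps

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        m = well_occupancy(a, b)
        # In a suitable noise window, reading the positive well as
        # logic-high gives AND; reading the negative well as logic-high
        # gives NAND: one system, two gates.
        print((a, b), "AND:", int(m > 0), "NAND:", int(m < 0))
    ```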

  17. Parallelization of MCNP 4, a Monte Carlo neutron and photon transport code system, in highly parallel distributed memory type computer

    International Nuclear Information System (INIS)

    Masukawa, Fumihiro; Takano, Makoto; Naito, Yoshitaka; Yamazaki, Takao; Fujisaki, Masahide; Suzuki, Koichiro; Okuda, Motoi.

    1993-11-01

    In order to improve the accuracy and calculating speed of shielding analyses, MCNP 4, a Monte Carlo neutron and photon transport code system, has been parallelized, and its efficiency measured, on the AP1000, a highly parallel distributed-memory computer. The code was analyzed statically and dynamically, and a suitable parallelization algorithm was determined for the shielding analysis functions of MCNP 4. This includes a strategy in which a new history is assigned to an idle processor element dynamically during execution. Furthermore, to avoid congestion in communication processing, a batch concept, processing multiple histories as a unit, has been introduced. By analyzing a sample cask problem with 2,000,000 histories on the AP1000 with 512 processor elements, a parallelization efficiency of 82% is achieved, and the calculation speed is estimated to be around 50 times that of the FACOM M-780. (author)

  18. Bulk-memory processor for data acquisition

    International Nuclear Information System (INIS)

    Nelson, R.O.; McMillan, D.E.; Sunier, J.W.; Meier, M.; Poore, R.V.

    1981-01-01

    To meet the diverse needs and data rate requirements at the Van de Graaff and Weapons Neutron Research (WNR) facilities, a bulk memory system has been implemented which includes a fast and flexible processor. This bulk memory processor (BMP) utilizes bit slice and microcode techniques and features a 24 bit wide internal architecture allowing direct addressing of up to 16 megawords of memory and histogramming up to 16 million counts per channel without overflow. The BMP is interfaced to the MOSTEK MK 8000 bulk memory system and to the standard MODCOMP computer I/O bus. Coding for the BMP both at the microcode level and with macro instructions is supported. The generalized data acquisition system has been extended to support the BMP in a manner transparent to the user.

  19. Effect of yogic education system and modern education system on memory.

    Science.gov (United States)

    Rangan, R; Nagendra, Hr; Bhat, G Ramachandra

    2009-07-01

    Memory is more associated with the temporal cortex than other cortical areas. The two main components of memory are spatial and verbal, which relate to the right and left hemispheres of the brain, respectively. Many investigations have shown the beneficial effects of yoga on memory and temporal functions of the brain. This study was aimed at comparing the effect of one Gurukula Education System (GES) school based on a yoga way of life with a school using the Modern Education System (MES) on memory. Forty nine boys of ages ranging from 11-13 years were selected from each of two residential schools, one MES and the other GES, providing similar ambiance and daily routines. The boys were matched for age and socioeconomic status. The GES educational program is based around integrated yoga modules while the MES provides a conventional modern education program. Memory was assessed by means of standard spatial and verbal memory tests applicable to Indian conditions before and after an academic year. The groups were matched at the start of the academic year, while after it the GES boys showed significantly greater enhancement in both verbal and visual memory scores than the MES boys (P < 0.001, Mann-Whitney test). The present study showed that the GES, meant for total personality development by adopting a yoga way of life, is more effective in enhancing visual and verbal memory scores than the MES.

  20. Open system evolution and 'memory dressing'

    International Nuclear Information System (INIS)

    Knezevic, Irena; Ferry, David K.

    2004-01-01

    Due to recent advances in quantum information, as well as in mesoscopic and nanoscale physics, the interest in the theory of open systems and decoherence has significantly increased. In this paper, we present an interesting approach to solving a time-convolutionless equation of motion for the open system reduced density matrix beyond the limit of weak coupling with the environment. Our approach is based on identifying an effective, memory-containing interaction in the equations of motion for the representation submatrices of the evolution operator (these submatrices are written in a special basis, adapted for the 'partial-trace-free' approach, in the system+environment Liouville space). We then identify the 'memory dressing', a quantity crucial for solving the equation of motion for the reduced density matrix, which separates the effective from the real physical interaction. The memory dressing obeys a self-contained nonlinear equation of motion, which we solve exactly. The solution can be represented in a diagrammatic fashion after introducing an 'information exchange propagator', a quantity that describes the transfer of information to and from the system, so the cumulative effect of the information exchange results in the memory dressing. In the case of weak system-environment coupling, we present the expansion of the reduced density matrix in terms of the physical interaction up to the third order. However, our approach is capable of going beyond the weak-coupling limit, and we show how short-time behavior of an open system can be analyzed for arbitrary coupling strength. We illustrate the approach with a simple numerical example of single-particle level broadening for a two-particle interacting system on short time scales. Furthermore, we point out a way to identify the structure of decoherence-free subspaces using the present approach.

  1. A CAMAC-based laboratory computer system

    International Nuclear Information System (INIS)

    Westphal, G.P.

    1975-01-01

    A CAMAC-based laboratory computer network is described. By sharing a common mass memory, it offers distinct advantages over slow and core-consuming single-processor installations. A fast compiler-BASIC, with extensions for CAMAC and real-time work, provides a convenient means for interactive experiment control.

  2. Concurrent Operations of O2-Tree on Shared Memory Multicore Architectures

    OpenAIRE

    Daniel Ohene-Kwofie; E. J. Otoo; Gideon Nimako

    2014-01-01

    Modern computer architectures provide high performance computing capability by having multiple CPU cores. Such systems are also typically associated with very large main-memory capacities, thereby allowing them to be used for fast processing of in-memory database applications. However, most of the concurrency control mechanisms associated with the index structures of these memory-resident databases do not scale well under high transaction rates. This paper presents the O2-Tree, a fast main me...

  3. A unitary signal-detection model of implicit and explicit memory.

    Science.gov (United States)

    Berry, Christopher J; Shanks, David R; Henson, Richard N A

    2008-10-01

    Do dissociations imply independent systems? In the memory field, the view that there are independent implicit and explicit memory systems has been predominantly supported by dissociation evidence. Here, we argue that many of these dissociations do not necessarily imply distinct memory systems. We review recent work with a single-system computational model that extends signal-detection theory (SDT) to implicit memory. SDT has had a major influence on research in a variety of domains. The current work shows that it can be broadened even further in its range of application. Indeed, the single-system model that we present does surprisingly well in accounting for some key dissociations that have been taken as evidence for independent implicit and explicit memory systems.
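
    The core of the single-system account is that one latent memory-strength variable, perturbed by task-specific noise, drives both explicit and implicit measures. The sketch below is a minimal toy version of that idea, not the authors' fitted model; all parameters are illustrative.

    ```python
    import numpy as np

    def simulate(n=100_000, mu=1.0, criterion=0.5, seed=0):
        """Toy single-system model: ONE latent strength f per item drives
        both tasks. Recognition adds its own measurement noise plus a
        decision criterion; priming adds different, independent noise."""
        rng = np.random.default_rng(seed)
        f_old = mu + rng.normal(size=n)     # strengths of studied items
        f_new = rng.normal(size=n)          # strengths of unstudied items
        hits = np.mean(f_old + 0.5 * rng.normal(size=n) > criterion)
        false_alarms = np.mean(f_new + 0.5 * rng.normal(size=n) > criterion)
        # Priming: identification fluency increases with the same f.
        priming = (np.mean(f_old + 1.5 * rng.normal(size=n))
                   - np.mean(f_new + 1.5 * rng.normal(size=n)))
        return hits, false_alarms, priming

    # Because the task-specific noise terms are independent, the model can
    # produce dissociations between the two measures without positing two
    # memory systems.
    print(simulate())
    ```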

  4. Translation Memory and Computer Assisted Translation Tool for Medieval Texts

    Directory of Open Access Journals (Sweden)

    Törcsvári Attila

    2013-05-01

    Full Text Available Translation memories (TMs), as part of Computer Assisted Translation (CAT) tools, support translators reusing portions of formerly translated text. Fencing books are good candidates for using TMs due to the high number of repeated terms. Medieval texts suffer a number of drawbacks that make even "simple" rewording to the modern version of the same language hard. The analyzed difficulties are: lack of systematic spelling, unusual word orders, and typos in the original. A hypothesis is made and verified that even simple modernization increases legibility and is feasible, and that it is worthwhile to apply translation memories due to the numerous and even extremely long repeated terms. Therefore, methods and algorithms are presented for 1. automated transcription of medieval texts (when a limited training set is available), and 2. collection of repeated patterns. The efficiency of the algorithms is analyzed for recall and precision.

  5. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy modern computer graphics demands, e.g., high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e., object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  6. Neural systems for tactual memories.

    Science.gov (United States)

    Bonda, E; Petrides, M; Evans, A

    1996-04-01

    1. The aim of this study was to investigate the neural systems involved in the memory processing of experiences through touch. 2. Regional cerebral blood flow was measured with positron emission tomography by means of the water bolus H2(15)O methodology in human subjects as they performed tasks involving different levels of tactual memory. In one of the experimental tasks, the subjects had to palpate nonsense shapes to match each one to a previously learned set, thus requiring constant reference to long-term memory. The other experimental task involved judgements of the recent recurrence of shapes during the scanning period. A set of three control tasks was used to control for the type of exploratory movements and sensory processing inherent in the two experimental tasks. 3. Comparisons of the distribution of activity between the experimental and the control tasks were carried out by means of the subtraction method. In relation to the control conditions, the two experimental tasks requiring memory resulted in significant changes within the posteroventral insula and the central opercular region. In addition, the task requiring recall from long-term memory yielded changes in the perirhinal cortex. 4. The above findings demonstrated that a ventrally directed parietoinsular pathway, leading to the posteroventral insula and the perirhinal cortex, constitutes a system by which long-lasting representations of tactual experiences are formed. It is proposed that the posteroventral insula is involved in tactual feature analysis, by analogy with the similar role of the inferotemporal cortex in vision, whereas the perirhinal cortex is further involved in the integration of these features into long-lasting representations of somatosensory experiences.

  7. Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, Keren

    2014-08-28

    Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The Sandia-led "Data Movement Dominates" project aimed to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing within a manageable power budget. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we have created an integrated modeling and simulation environment that uniquely integrates the physical behavior of the optical layer. The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM.

  8. Parallel-vector algorithms for particle simulations on shared-memory multiprocessors

    International Nuclear Information System (INIS)

    Nishiura, Daisuke; Sakaguchi, Hide

    2011-01-01

    Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles freely move within a given space, and so on a distributed-memory system, load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. Shared-memory systems achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by a cell label in the domain to which the particles belong. Then, a list of contact candidates is constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, scalar supercomputer, vector supercomputer, and graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
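
    The pre-conditioning step in the first algorithm can be sketched directly: sort particle labels by cell label, then pair sorted labels within each cell and across neighboring cells. The Python sketch below is our own illustration and assumes a hypothetical input format (cell_of_particle and a neighbor_cells map listing only larger-labeled neighbors, so each pair is produced once).

    ```python
    import numpy as np

    def contact_candidates(cell_of_particle, neighbor_cells):
        """Sketch of the pre-conditioning step: sort particle labels by
        cell label, then pair labels within each cell and with the
        neighboring cells. cell_of_particle[i] is particle i's cell id."""
        order = np.argsort(cell_of_particle, kind="stable")
        sorted_cells = cell_of_particle[order]
        cells, starts = np.unique(sorted_cells, return_index=True)
        ends = list(starts[1:]) + [len(order)]
        runs = {int(c): order[s:e] for c, s, e in zip(cells, starts, ends)}
        pairs = []
        for c, members in runs.items():
            for i, a in enumerate(members):        # pairs within the cell
                for b in members[i + 1:]:
                    pairs.append((int(a), int(b)))
            for nc in neighbor_cells.get(c, []):   # pairs across cells
                for a in members:
                    for b in runs.get(nc, []):
                        pairs.append((int(a), int(b)))
        return pairs

    cells = np.array([0, 1, 0, 1, 1])              # particle -> cell id
    print(contact_candidates(cells, {0: [1]}))
    ```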

  9. Studies of electron collisions with polyatomic molecules using distributed-memory parallel computers

    International Nuclear Information System (INIS)

    Winstead, C.; Hipes, P.G.; Lima, M.A.P.; McKoy, V.

    1991-01-01

    Elastic electron scattering cross sections from 5-30 eV are reported for the molecules C2H4, C2H6, C3H8, Si2H6, and GeH4, obtained using an implementation of the Schwinger multichannel method for distributed-memory parallel computer architectures. These results, obtained within the static-exchange approximation, are in generally good agreement with the available experimental data. These calculations demonstrate the potential of highly parallel computation in the study of collisions between low-energy electrons and polyatomic gases. The computational methodology discussed is also directly applicable to the calculation of elastic cross sections at higher levels of approximation (target polarization) and of electronic excitation cross sections.

  10. Embedded memory design for multi-core and systems on chip

    CERN Document Server

    Mohammad, Baker

    2014-01-01

    This book describes the various tradeoffs systems designers face when designing embedded memory. Readers designing multi-core systems and systems on chip will benefit from the discussion of different topics from memory architecture, array organization, circuit design techniques and design for test. The presentation enables a multi-disciplinary approach to chip design, which bridges the gap between the architecture level and circuit level, in order to address yield, reliability and power-related issues for embedded memory. The book provides a comprehensive overview of embedded memory design and associated challenges and choices; explains tradeoffs and dependencies across the different disciplines involved with multi-core and system-on-chip memory design; includes a detailed discussion of the memory hierarchy and its impact on energy and performance; and uses real product examples to demonstrate embedded memory design flow from architecture to circuit ...

  11. A long-memory model of motor learning in the saccadic system: a regime-switching approach.

    Science.gov (United States)

    Wong, Aaron L; Shelhamer, Mark

    2013-08-01

    Maintenance of movement accuracy relies on motor learning, by which prior errors guide future behavior. One aspect of this learning process involves the accurate generation of predictions of movement outcome. These predictions can, for example, drive anticipatory movements during a predictive-saccade task. Predictive saccades are rapid eye movements made to anticipated future targets based on error information from prior movements. This predictive process exhibits long-memory (fractal) behavior, as suggested by inter-trial fluctuations. Here, we model this learning process using a regime-switching approach, which avoids the computational complexities associated with true long-memory processes. The resulting model demonstrates two fundamental characteristics. First, long-memory behavior can be mimicked by a system possessing no true long-term memory, producing model outputs consistent with human-subjects performance. In contrast, the popular two-state model, which is frequently used in motor learning, cannot replicate these findings. Second, our model suggests that apparent long-term memory arises from the trade-off between correcting for the most recent movement error and maintaining consistent long-term behavior. Thus, the model surprisingly predicts that stronger long-memory behavior correlates to faster learning during adaptation (in which systematic errors drive large behavioral changes); greater apparent long-term memory indicates more effective incorporation of error from the cumulative history across trials.
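
    The central claim, long-memory-like fluctuations from a system with no true long-term memory, can be illustrated with a toy regime-switching process that alternates between a fast error-correcting regime and a slowly drifting one. The sketch below is our own caricature with illustrative parameters, not the authors' fitted model.

    ```python
    import numpy as np

    def regime_switching_series(n=4000, p_switch=0.02, seed=0):
        """Toy regime-switching process: it alternates between a fast
        error-correcting regime and a slowly drifting 'consistent' regime.
        No state carries true long-term memory, yet the fluctuations decay
        slowly enough to mimic long-memory behavior."""
        rng = np.random.default_rng(seed)
        out = np.empty(n)
        x, fast = 0.0, True
        for t in range(n):
            if rng.random() < p_switch:
                fast = not fast                   # switch regimes
            a = 0.3 if fast else 0.98             # error-correction gain
            x = a * x + rng.normal(scale=0.1 if fast else 0.02)
            out[t] = x
        return out

    s = regime_switching_series()
    # A slowly decaying autocorrelation is the usual signature read as
    # long memory; here it emerges without any true long-memory process.
    print([round(float(np.corrcoef(s[:-k], s[k:])[0, 1]), 2)
           for k in (1, 10, 50)])
    ```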

  12. Dynamic switching between semantic and episodic memory systems.

    Science.gov (United States)

    Kompus, Kristiina; Olsson, Carl-Johan; Larsson, Anne; Nyberg, Lars

    2009-09-01

    It has been suggested that episodic and semantic long-term memory systems interact during retrieval. Here we examined the flexibility of memory retrieval in an associative task taxing memories of different strength, assumed to differentially engage episodic and semantic memory. Healthy volunteers were pre-trained on a set of 36 face-name pairs over a 6-week period. Another set of 36 items was shown only once during the same time period. About 3 months after the training period all items were presented in a randomly intermixed order in an event-related fMRI study of face-name memory. Once-presented items differentially activated anterior cingulate cortex and a right prefrontal region that previously have been associated with episodic retrieval mode. Highly familiar items were associated with stronger activation of posterior cortices and a left frontal region. These findings fit a model of memory retrieval by which early processes determine, on a trial-by-trial basis, if the task can be solved by the default semantic system. If not, there is a dynamic shift to cognitive control processes that guide retrieval from episodic memory.

  13. Non-volatile main memory management methods based on a file system.

    Science.gov (United States)

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because of its high performance, which matches that of DRAM. A number of studies have investigated its use for main memory and storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as the basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed an evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects caused by the longer access latency of NV memory through cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.

  14. Logic computation in phase change materials by threshold and memory switching.

    Science.gov (United States)

    Cassinerio, M; Ciocchini, N; Ielmini, D

    2013-11-06

    Memristors, namely hysteretic devices capable of changing their resistance in response to applied electrical stimuli, may provide new opportunities for future memory and computation, thanks to their scalable size, low switching energy and nonvolatile nature. We have developed a functionally complete set of logic functions including NOR, NAND and NOT gates, each utilizing a single phase-change memristor (PCM) where resistance switching is due to the phase transformation of an active chalcogenide material. The logic operations are enabled by the high functionality of nanoscale phase change, featuring voltage comparison, additive crystallization and pulse-induced amorphization. The nonvolatile nature of memristive states provides the basis for developing reconfigurable hybrid logic/memory circuits featuring low-power and high-speed switching.

  15. Effect of yogic education system and modern education system on memory

    Directory of Open Access Journals (Sweden)

    Rangan R

    2009-01-01

    Full Text Available Background/Aim: Memory is more associated with the temporal cortex than other cortical areas. The two main components of memory are spatial and verbal, which relate to the right and left hemispheres of the brain, respectively. Many investigations have shown the beneficial effects of yoga on memory and temporal functions of the brain. This study was aimed at comparing the effect of one Gurukula Education System (GES) school based on a yoga way of life with a school using the Modern Education System (MES) on memory. Materials and Methods: Forty nine boys of ages ranging from 11-13 years were selected from each of two residential schools, one MES and the other GES, providing similar ambiance and daily routines. The boys were matched for age and socioeconomic status. The GES educational program is based around integrated yoga modules while the MES provides a conventional modern education program. Memory was assessed by means of standard spatial and verbal memory tests applicable to Indian conditions before and after an academic year. Results: The groups were matched at the start of the academic year, while after it the GES boys showed significantly greater enhancement in both verbal and visual memory scores than the MES boys (P < 0.001, Mann-Whitney test). Conclusions: The present study showed that the GES, meant for total personality development by adopting a yoga way of life, is more effective in enhancing visual and verbal memory scores than the MES.

  16. Techniques for Reducing Consistency-Related Communication in Distributed Shared Memory System

    OpenAIRE

    Zwaenepoel, W; Bennett, J.K.; Carter, J.B.

    1995-01-01

    Distributed shared memory (DSM) is an abstraction of shared memory on a distributed memory machine. Hardware DSM systems support this abstraction at the architecture level; software DSM systems support the abstraction within the runtime system. One of the key problems in building an efficient software DSM system is to reduce the amount of communication needed to keep the distributed memories consistent. In this paper we present four techniques for doing so: 1) software release consistency; 2)...

  17. A review of emerging non-volatile memory (NVM) technologies and applications

    Science.gov (United States)

    Chen, An

    2016-11-01

    This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.

  18. Noise filtering algorithm for the MFTF-B computer based control system

    International Nuclear Information System (INIS)

    Minor, E.G.

    1983-01-01

    An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions.
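
    The properties claimed for the filter (four bytes of state per channel; only subtraction and comparison per update) match a classic deadband scheme: remember the last reported value and forward a new sample only when it differs by more than a threshold. The sketch below is our reconstruction under that reading, not the MFTF-B code; names and the threshold are hypothetical.

    ```python
    def make_deadband_filter(threshold):
        """Deadband reading of the filter described above: per channel,
        store only the last reported value (a few bytes of state); each
        update costs one subtraction and one comparison. A sample is
        forwarded to the data base only when it moves beyond the noise
        threshold."""
        last_reported = {}  # channel -> last value sent to the data base

        def update(channel, value):
            prev = last_reported.get(channel)
            if prev is None or abs(value - prev) > threshold:
                last_reported[channel] = value
                return value        # significant change: forward it
            return None             # treated as noise: suppress the message

        return update

    f = make_deadband_filter(threshold=0.5)
    for v in [10.0, 10.2, 10.4, 11.1, 11.0]:
        print(v, "->", f("ch0", v))
    # Only 10.0 and 11.1 generate messages; the rest are filtered as noise.
    ```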

  19. Multiprocessor shared-memory information exchange

    International Nuclear Information System (INIS)

    Santoline, L.L.; Bowers, M.D.; Crew, A.W.; Roslund, C.J.; Ghrist, W.D. III

    1989-01-01

    In distributed microprocessor-based instrumentation and control systems, the inter- and intra-subsystem communication requirements ultimately form the basis for the overall system architecture. This paper describes a software protocol which addresses the intra-subsystem communications problem. Specifically, the protocol allows multiple processors to exchange information via a shared-memory interface. The authors' primary goal is to provide a reliable means for information to be exchanged between central application processor boards (masters) and dedicated function processor boards (slaves) in a single computer chassis. The resultant Multiprocessor Shared-Memory Information Exchange (MSMIE) protocol, a standard master-slave shared-memory interface suitable for use in nuclear safety systems, is designed to pass unidirectional buffers of information between the processors while providing a minimum, deterministic cycle time for this data exchange.
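
    A protocol in this spirit can be sketched with three rotating buffers and per-buffer status flags, so the slave always has a free buffer to fill, the master always sees the most recent complete buffer, and the lock is held only long enough to swap labels. The Python sketch below is a simplified single-master, single-slave illustration, not the published MSMIE specification.

    ```python
    import threading

    class SharedBufferExchange:
        """Simplified single-master, single-slave exchange in the spirit
        of the protocol described above. Three buffers rotate through the
        states idle/newest/busy; the lock is held only to swap labels,
        keeping the exchange time short and deterministic."""

        def __init__(self):
            self.buffers = [None, None, None]
            self.state = ["idle", "idle", "idle"]   # per-buffer status flag
            self.lock = threading.Lock()

        def slave_publish(self, data):
            with self.lock:
                i = self.state.index("idle")        # claim a free buffer
                self.state[i] = "busy"
            self.buffers[i] = data                  # fill outside the lock
            with self.lock:
                for j, s in enumerate(self.state):  # demote the old newest
                    if s == "newest":
                        self.state[j] = "idle"
                self.state[i] = "newest"

        def master_read(self):
            with self.lock:
                if "newest" not in self.state:
                    return None                     # nothing new to read
                i = self.state.index("newest")
                self.state[i] = "busy"              # protect it while reading
            data = self.buffers[i]
            with self.lock:
                self.state[i] = "idle"              # reading done; release it
            return data
    ```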

  20. Dynamics of Shape Memory Alloy Systems, Phase 2

    Science.gov (United States)

    2015-12-22

    AFOSR Final Report AFRL-AFOSR-CL-TR-2016-0003. Grant title: Nonlinear Dynamics of Shape Memory Alloy Systems, Phase 2. Grant #: FA9550-11-1-0284. Principal investigator: Marcelo Savi, Fundação Coordenação de Projetos, Pesquisas e Estudos Tecnológicos. Distribution approved for public release. Among the works cited: "Nonlinear Dynamics and Chaos in Systems with Discontinuous Support Using a Switch Model", DINAME 2005 - XI International Conference on Dynamic Problems in Mechanics.

  1. Brains of verbal memory specialists show anatomical differences in language, memory and visual systems.

    Science.gov (United States)

    Hartzell, James F; Davis, Ben; Melcher, David; Miceli, Gabriele; Jovicich, Jorge; Nath, Tanmay; Singh, Nandini Chatterjee; Hasson, Uri

    2016-05-01

    We studied a group of verbal memory specialists to determine whether intensive oral text memory is associated with structural features of hippocampal and lateral-temporal regions implicated in language processing. Professional Vedic Sanskrit Pandits in India train from childhood for around 10 years in an ancient, formalized tradition of oral Sanskrit text memorization and recitation, mastering the exact pronunciation and invariant content of multiple 40,000-100,000 word oral texts. We conducted structural analysis of gray matter density, cortical thickness, local gyrification, and white matter structure, relative to matched controls. We found massive gray matter density and cortical thickness increases in Pandit brains in language, memory and visual systems, including i) bilateral lateral temporal cortices and ii) the anterior cingulate cortex and the hippocampus, regions associated with long and short-term memory. Differences in hippocampal morphometry matched those previously documented for expert spatial navigators and individuals with good verbal working memory. The findings provide unique insight into the brain organization implementing formalized oral knowledge systems.

  2. Stress and multiple memory systems: from 'thinking' to 'doing'.

    Science.gov (United States)

    Schwabe, Lars; Wolf, Oliver T

    2013-02-01

    Although it has been known for decades that stress influences memory performance, it was only recently shown that stress may alter the contribution of multiple, anatomically and functionally distinct memory systems to behavior. Here, we review recent animal and human studies demonstrating that stress promotes a shift from flexible 'cognitive' to rather rigid 'habit' memory systems and discuss, based on recent neuroimaging data in humans, the underlying brain mechanisms. We argue that, despite being generally adaptive, this stress-induced shift towards 'habit' memory may, in vulnerable individuals, be a risk factor for psychopathology.

  3. Novel procedure for characterizing nonlinear systems with memory: 2017 update

    Science.gov (United States)

    Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.

    2017-05-01

    The present article discusses novel improvements in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3]. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order and alleviate the Curse of Dimensionality (COD) in order to realize practical nonlinear solutions of scientific and engineering interest.
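
    The kernel-fitting step lends itself to a compact least-squares illustration: build a design matrix from lagged copies of x(t) and their products, then solve for h in one shot. The sketch below stops at second order to keep the matrix small (the quoted work goes to third order with further pruning) and uses synthetic data; it illustrates a generic Volterra fit, not the NWV algorithm itself.

    ```python
    import numpy as np

    def fit_kernel(x, z, memory=8):
        """Generic least-squares Volterra fit, orders 0-2 only. Given an
        excitation x(t) and a response z(t), solve for h minimizing
        ||z - y(h)||^2, where the modeled response y is linear in h:
        a constant, lagged copies of x, and their pairwise products."""
        n = len(x)
        rows = []
        for t in range(memory, n):
            lags = x[t - memory:t][::-1]            # x(t-1) ... x(t-memory)
            quad = np.outer(lags, lags)[np.triu_indices(memory)]
            rows.append(np.concatenate(([1.0], lags, quad)))
        A = np.array(rows)
        h, *_ = np.linalg.lstsq(A, z[memory:], rcond=None)
        return h  # flattened [h0 | first-order terms | second-order terms]

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    z = 0.2 + 0.9 * np.roll(x, 1) + 0.3 * np.roll(x, 1) * np.roll(x, 2)
    print(fit_kernel(x, z)[:3])  # recovers h0 ~ 0.2 and the lag-1 term ~ 0.9
    ```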

  4. A Compute Capable SSD Architecture for Next-Generation Non-volatile Memories

    Energy Technology Data Exchange (ETDEWEB)

    De, Arup [Univ. of California, San Diego, CA (United States)

    2014-01-01

    Existing storage technologies (e.g., disks and flash) are failing to cope with processor and main memory speed and are limiting the overall performance of many large-scale I/O- or data-intensive applications. Emerging fast byte-addressable non-volatile memory (NVM) technologies, such as phase-change memory (PCM), spin-transfer torque memory (STTM), and memristors, are very promising and are approaching DRAM-like performance with lower power consumption and higher density as process technology scales. These new memories are narrowing the performance gap between storage and main memory and are putting forward challenging problems on existing SSD architecture, I/O interfaces (e.g., SATA, PCIe), and software. This dissertation addresses those challenges and presents a novel SSD architecture called XSSD. XSSD offloads computation to storage to exploit fast NVMs and reduce the redundant data traffic across the I/O bus. XSSD offers a flexible RPC-based programming framework that developers can use for application development on the SSD without dealing with the complications of the underlying architecture and communication management. We have built a prototype of XSSD on the BEE3 FPGA prototyping system. We implement various data-intensive applications and achieve speedups and energy efficiency of 1.5-8.9x and 1.7-10.27x, respectively. This dissertation also compares XSSD with previous work on intelligent storage and intelligent memory. The existing ecosystem and these new enabling technologies make this system more viable than earlier ones.

  5. Exploiting Data Similarity to Reduce Memory Footprints

    Science.gov (United States)

    2011-01-01

    We expect the budget for an exascale system to be approximately $200M, and memory costs will account for about half of that budget [21]. Monetary considerations will thus lead to significantly less main memory relative to compute capability in exascale systems. (Only figure and reference residue of this record survives beyond these statements.)

  6. The memory systems of children with (central) auditory disorder.

    Science.gov (United States)

    Pires, Mayra Monteiro; Mota, Mailce Borges; Pinheiro, Maria Madalena Canina

    2015-01-01

    This study aims to investigate working, declarative, and procedural memory in children with (central) auditory processing disorder who showed poor phonological awareness. Thirty 9- and 10-year-old children participated in the study and were distributed into two groups: a control group consisting of 15 children with typical development, and an experimental group consisting of 15 children with (central) auditory processing disorder who were classified according to three behavioral tests and who showed poor phonological awareness in the CONFIAS test battery. The memory systems were assessed through the adapted tests in the program E-PRIME 2.0. The working memory was assessed by the Working Memory Test Battery for Children (WMTB-C), whereas the declarative memory was assessed by a picture-naming test and the procedural memory was assessed by means of a morphosyntactic processing test. The results showed that, when compared to the control group, children with poor phonological awareness scored lower in the working, declarative, and procedural memory tasks. The results of this study suggest that in children with (central) auditory processing disorder, phonological awareness is associated with the analyzed memory systems.

  7. Computers, the Human Mind, and My In-Laws' House.

    Science.gov (United States)

    Esque, Timm J.

    1996-01-01

    Discussion of human memory, computer memory, and the storage of information focuses on a metaphor that can account for memory without storage and can set the stage for systemic research around a more comprehensive, understandable theory. (Author/LRW)

  8. Data fusion using dynamic associative memory

    Science.gov (United States)

    Lo, Titus K. Y.; Leung, Henry; Chan, Keith C. C.

    1997-07-01

    An associative memory, unlike an addressed memory used in conventional computers, is content addressable. That is, storing and retrieving information are not based on the location of the memory cell but on the content of the information. There are a number of approaches to implement an associative memory, one of which is to use a neural dynamical system where objects being memorized or recognized correspond to its basic attractors. The work presented in this paper is the investigation of applying a particular type of neural dynamical associative memory, namely the projection network, to pattern recognition and data fusion. Three types of attractors, which are fixed-point, limit- cycle, and chaotic, have been studied, evaluated and compared.

  9. A Stream Tilling Approach to Surface Area Estimation for Large Scale Spatial Data in a Shared Memory System

    Directory of Open Access Journals (Sweden)

    Liu Jiping

    2017-12-01

    Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm due to limited physical memory resources and the very slow disk transfer rate. In this paper, we propose a stream tilling approach to surface area estimation that first decomposes a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process is broken. Then, we realize a streaming framework for the scheduling of the I/O processes and computing units. Each computing unit encapsulates an identical copy of the estimation algorithm, and multiple asynchronous computing units can work individually in parallel. Finally, the performed experiment demonstrates that our stream tilling estimation can efficiently relieve the heavy pressure of I/O-bound work, and the measured speedups after optimization greatly outperform directly parallel versions on shared memory systems with multi-core processors.
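
    A minimal sketch of the streaming idea, under simplifying assumptions: tiles are "read" by stand-in I/O and pushed through a pool of identical computing units, and the per-tile areas are reduced at the end. The tile generator and the gradient-based area formula below are illustrative, not the paper's algorithm.

    ```python
    # Stream tiles through a pool of identical computing units.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def read_tile(tile_id, size=256):
        # Stand-in for tile I/O: a deterministic random elevation grid.
        rng = np.random.default_rng(tile_id)
        return rng.random((size, size))

    def tile_area(z, cell=1.0):
        # Crude per-tile surface area estimate from local gradients.
        gy, gx = np.gradient(z, cell)
        return float((np.sqrt(1.0 + gx**2 + gy**2) * cell * cell).sum())

    with ThreadPoolExecutor(max_workers=4) as pool:
        areas = list(pool.map(lambda t: tile_area(read_tile(t)), range(16)))
    print("total surface area:", sum(areas))
    ```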

  10. Milestoning with transition memory

    Science.gov (United States)

    Hawk, Alexander T.; Makarov, Dmitrii E.

    2011-12-01

    Milestoning is a method used to calculate the kinetics and thermodynamics of molecular processes occurring on time scales that are not accessible to brute-force molecular dynamics (MD). In milestoning, the conformation space of the system is sectioned by hypersurfaces (milestones), an ensemble of trajectories is initialized on each milestone, and MD simulations are performed to calculate transitions between milestones. The transition probabilities and transition time distributions are then used to model the dynamics of the system with a Markov renewal process, wherein a long trajectory of the system is approximated as a succession of independent transitions between milestones. This approximation is justified if the transition probabilities and transition times are statistically independent. In practice, this amounts to a requirement that milestones are spaced such that trajectories lose position and velocity memory between subsequent transitions. Unfortunately, limiting the number of milestones limits both the resolution at which a system's properties can be analyzed and the computational speedup achieved by the method. We propose a generalized milestoning procedure, milestoning with transition memory (MTM), which accounts for the memory of previous transitions made by the system. When a reaction coordinate is used to define the milestones, the MTM procedure can be carried out at no significant additional expense compared to conventional milestoning. To test MTM, we have applied the version that allows for memory of the previous step to a toy model of a polymer chain undergoing Langevin dynamics in solution. We have computed the mean first passage time for the chain to attain a cyclic conformation and found that the number of milestones that can be used without incurring significant errors in the first passage time is at least 8 times that permitted by conventional milestoning. We further demonstrate that, unlike conventional milestoning, MTM permits
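
    For conventional milestoning, the mean first passage time (MFPT) follows from the Markov renewal picture: with K[i, j] the probability that a trajectory leaving milestone i next hits j, and tau[i] the mean lifetime of milestone i, the MFPTs t to an absorbing milestone satisfy t = tau + K t over the non-absorbing states. A small sketch with made-up numbers (MTM would replace K with history-dependent statistics):

    ```python
    import numpy as np

    # K[i, j]: probability that a trajectory leaving milestone i next hits j;
    # tau[i]: mean transition time out of milestone i. Milestone 3 absorbs.
    K = np.array([[0.0, 1.0, 0.0, 0.0],
                  [0.5, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 0.5],
                  [0.0, 0.0, 0.0, 0.0]])
    tau = np.array([1.0, 2.0, 2.0, 0.0])     # arbitrary time units

    free = slice(0, 3)                       # the non-absorbing milestones
    t = np.linalg.solve(np.eye(3) - K[free, free], tau[free])
    print("MFPT from milestone 0:", t[0])    # 15.0 for these numbers
    ```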

  11. C-RAM: breaking mobile device memory barriers using the cloud

    OpenAIRE

    Pamboris, A; Pietzuch, P

    2015-01-01

    Mobile applications are constrained by the available memory of mobile devices. We present C-RAM, a system that uses cloud-based memory to extend the memory of mobile devices. It splits application state and its associated computation between a mobile device and a cloud node to allow applications to consume more memory, while minimising the performance impact. C-RAM thus enables developers to realise new applications or port legacy desktop applications with a large memory footprint to mobile ...

  12. Counterbalancing Regulation in Response Memory of a Positively Autoregulated Two-Component System.

    Science.gov (United States)

    Gao, Rong; Godfrey, Katherine A; Sufian, Mahir A; Stock, Ann M

    2017-09-15

    Fluctuations in nutrient availability often result in recurrent exposures to the same stimulus conditions. The ability to memorize the past event and use the "memory" to make adjustments to current behaviors can lead to a more efficient adaptation to the recurring stimulus. A short-term phenotypic memory can be conferred via carryover of the response proteins to facilitate the recurrent response, but the additional accumulation of response proteins can lead to a deviation from response homeostasis. We used the Escherichia coli PhoB/PhoR two-component system (TCS) as a model system to study how cells cope with the recurrence of environmental phosphate (Pi) starvation conditions. We discovered that "memory" of prior Pi starvation can exert distinct effects through two regulatory pathways, the TCS signaling pathway and the stress response pathway. Although carryover of TCS proteins can lead to higher initial levels of transcription factor PhoB and a faster initial response in prestarved cells than in cells not starved, the response enhancement can be overcome by an earlier and greater repression of promoter activity in prestarved cells due to the memory of the stress response. The repression counterbalances the carryover of the response proteins, leading to a homeostatic response whether or not cells are prestimulated. A computational model based on sigma factor competition was developed to understand the memory of stress response and to predict the homeostasis of other PhoB-regulated response proteins. Our insight into the history-dependent PhoBR response may provide a general understanding of how TCSs respond to recurring stimuli and adapt to fluctuating environmental conditions. IMPORTANCE Bacterial cells in their natural environments experience scenarios that are far more complex than are typically replicated in laboratory experiments. The architectures of signaling systems and the integration of multiple adaptive pathways have evolved to deal with such complexity

  13. Database architecture optimized for the new bottleneck: Memory access

    NARCIS (Netherlands)

    P.A. Boncz (Peter); S. Manegold (Stefan); M.L. Kersten (Martin)

    1999-01-01

    textabstractIn the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the

  14. Optimizing Database Architecture for the New Bottleneck: Memory Access

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2000-01-01

    textabstractIn the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the
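
    In the spirit of the simple scan test these two records describe, the sketch below times the same amount of arithmetic over contiguous versus strided data; on cache-based machines the strided scan touches a new cache line on almost every access and runs markedly slower. The array sizes and stride are arbitrary.

    ```python
    # Contiguous vs. strided scan over the same number of elements.
    import time
    import numpy as np

    a = np.arange(16_000_000, dtype=np.int64)   # ~128 MB of data

    def timed(label, view):
        t0 = time.perf_counter()
        s = view.sum()
        print(f"{label}: {time.perf_counter() - t0:.4f} s (sum={s})")

    timed("sequential", a[:1_000_000])   # 1M contiguous elements
    timed("stride 16 ", a[::16])         # 1M elements, 128 bytes apart
    ```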

  15. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing; Zhang, Weigong; Fu, Xin

    2017-02-06

    The performance of 3D rendering on a Graphics Processing Unit (GPU), which converts a 3D vector stream into 2D frames with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving the overall rendering performance. 3D stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory based GPUs for efficient 3D rendering.

  16. Reactive wavepacket dynamics for four atom systems on scalable parallel computers

    International Nuclear Information System (INIS)

    Goldfield, E.M.

    1994-01-01

    While time-dependent quantum mechanics has been successfully applied to many three-atom systems, it was nevertheless a computational challenge to use wavepacket methods to study four-atom systems, systems with several heavy atoms, and systems with deep potential wells. S.K. Gray and the author are studying the reaction OH + CO ↔ (HOCO) ↔ H + CO2, a difficult reaction by all the above criteria. Memory considerations alone made it impossible to use a single IBM RS/6000 workstation to study a four degree-of-freedom model of this system. They have developed a scalable parallel wavepacket code for the IBM SP1 and have run it on the SP1 at Argonne and at the Cornell Theory Center. The wavepacket, defined on a four-dimensional grid, is spread out among the processors. Two-dimensional FFTs are used to compute the kinetic energy operator acting on the wavepacket. Accomplishing this task, which is the computationally intensive part of the calculation, requires a global transpose of the data. This transpose is the only serious communication between processors. Since the problem is essentially data-parallel, communication is regular and load-balancing is excellent. But as the problem is moderately fine-grained and messages are long, the ratio of communication to computation is somewhat high and they typically get about 55% of ideal speed-up
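
    The FFT step at the heart of such wavepacket codes is easy to show in one dimension: in Fourier space the kinetic energy operator becomes multiplication by hbar^2 k^2 / 2m. A single-process numpy sketch follows; the parallel 4D code applies the same idea axis by axis, which is what forces the global transpose.

    ```python
    # Apply the kinetic-energy operator to a 1D wavepacket via FFT.
    import numpy as np

    hbar = m = 1.0
    n, L = 256, 20.0
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers

    psi = np.exp(-x**2) * np.exp(1j * 5.0 * x)       # Gaussian packet, <p> = 5
    T_psi = np.fft.ifft(hbar**2 * k**2 / (2 * m) * np.fft.fft(psi))

    # Expectation value <T>; for this packet it is (25 + 1) / 2 = 13.
    print(np.real(np.vdot(psi, T_psi) / np.vdot(psi, psi)))
    ```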

  17. Chemical memory reactions induced bursting dynamics in gene expression.

    Science.gov (United States)

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memorial phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has recently been observed in experiments. The importance of memory reactions was further validated by a stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool for studying memory processes and conditional chemical reactions in a wide range of complex biological systems.
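
    To give a flavor of simulating a reaction whose effect depends on the system's history, the sketch below runs a plain Gillespie loop plus a queue of pending delayed completions, a standard delay-SSA device. It is not the memory-SSA algorithm proposed in the paper, and the rate constants are arbitrary.

    ```python
    # Gillespie loop with one "memory" reaction: a birth whose effect
    # fires only after a delay; the pending queue carries the history.
    import heapq
    import random

    k_on, k_off, delay = 2.0, 0.1, 5.0   # birth rate, decay rate, delay
    t, x, pending = 0.0, 0, []           # time, copy number, queued births

    while t < 100.0:
        a1, a2 = k_on, k_off * x         # propensities: delayed birth, decay
        dt = random.expovariate(a1 + a2)
        if pending and pending[0] <= t + dt:
            t = heapq.heappop(pending)   # a queued birth completes first
            x += 1
            continue                     # state changed: redraw propensities
        t += dt
        if random.random() < a1 / (a1 + a2):
            heapq.heappush(pending, t + delay)   # effect fires only later
        else:
            x -= 1                       # decay (never chosen when x == 0)

    print("copy number at t=100:", x)
    ```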

  18. A Josephson ternary associative memory cell

    International Nuclear Information System (INIS)

    Morisue, M.; Suzuki, K.

    1989-01-01

    This paper describes a three-valued content-addressable memory cell using a Josephson complementary ternary logic circuit named the JCTL. The memory cell proposed here can perform the three operations of searching, writing and reading in a ternary logic system. The principle of the memory circuit is illustrated in detail using the threshold characteristics of the JCTL. Computer simulations have been made to investigate how high-performance operation can be achieved. Simulation results show that the cycle time of the memory operation is 120 ps, power consumption is about 0.5 μW/cell, and the tolerances of the writing and reading operations are ±15% and ±24%, respectively

  19. Josephson Thermal Memory

    Science.gov (United States)

    Guarcello, Claudio; Solinas, Paolo; Braggio, Alessandro; Di Ventra, Massimiliano; Giazotto, Francesco

    2018-01-01

    We propose a superconducting thermal memory device that exploits the thermal hysteresis in a flux-controlled, temperature-biased superconducting quantum-interference device (SQUID). This system reveals a flux-controllable temperature bistability, which can be used to define two well-distinguishable thermal logic states. We discuss a suitable writing-reading procedure for these memory states. The time of the memory writing operation is expected to be on the order of 0.2 ns for a Nb-based SQUID in thermal contact with a phonon bath at 4.2 K. We suggest a noninvasive readout scheme for the memory states based on the measurement of the effective resonance frequency of a tank circuit inductively coupled to the SQUID. The proposed device paves the way for a practical implementation of thermal logic and computation. A further advantage of this proposal is that it also represents an example of harvesting thermal energy in superconducting circuits.

  20. Evaluation of the computer code system RADHEAT-V4 by analysing benchmark problems on radiation shielding

    International Nuclear Information System (INIS)

    Sakamoto, Yukio; Naito, Yoshitaka

    1990-11-01

    A computer code system, RADHEAT-V4, has been developed for the safety evaluation of radiation shielding in nuclear fuel facilities. To evaluate the performance of the code system, 18 benchmark problems were selected and analysed. The evaluated radiations are neutrons and gamma-rays. The benchmark problems consist of penetration, streaming and skyshine. The computed results are more accurate than those of the Sn codes ANISN and DOT3.5 or the Monte Carlo code MORSE. A large core memory and frequent I/O are, however, required for RADHEAT-V4. (author)

  1. Overview of emerging nonvolatile memory technologies.

    Science.gov (United States)

    Meena, Jagan Singh; Sze, Simon Min; Chand, Umesh; Tseng, Tseung-Yuen

    2014-01-01

    Nonvolatile memory technologies in Si-based electronics date back to the 1990s. The ferroelectric field-effect transistor (FeFET) was one of the most promising devices for replacing conventional Flash memory, which was facing physical scaling limitations at that time. A variant of charge storage memory referred to as Flash memory is widely used in consumer electronic products such as cell phones and music players, while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. Emerging memory technologies promise to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets including digital cameras, cell phones and portable music players. They are being investigated as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory, and so become very attractive candidates for future memory hierarchies. Many other new classes of emerging memory technologies, such as transparent and plastic, three-dimensional (3-D), and quantum dot memory technologies, have also gained tremendous popularity in recent years. Consequently, it is not an exaggeration to say that computer memory could soon earn the ultimate commercial validation for scale-up and production: the cheap plastic knockoff. Therefore, this review is devoted to the rapidly developing new

  3. An integrated on-line system for the evaluation of ECG patterns with a small process computer

    International Nuclear Information System (INIS)

    Schoffa, G.; Eggenberger, O.; Krueger, G.; Karlsruhe Univ.

    1975-01-01

    This paper describes an on-line system for ECG processing with a small computer (8K memory) and a magnetic tape cassette for mass storage, capable of evaluating 30 ECG patterns in a twelve-lead system per day. The use of a small computer was made possible by a compact, easy-to-handle operating system and space-saving programs. The system described was specifically intended for use in smaller hospitals whose low number of ECGs per day does not allow economic operation of larger DP installations. Economy calculations based on the break-even-point method, with special regard to installation, maintenance and personnel costs, already show that operating a small computer is economical at a rate of 5 ECGs per day. (orig.)

  4. SABRE: a computer-based system for the assessment of body radioactivity by photon spectrometry. Part 4

    International Nuclear Information System (INIS)

    Venn, J.B.

    1982-02-01

    A PDP-11/10 computer system is described for the acquisition and processing of pulse height spectra from detectors used for the measurement of body radioactivity. Version 4 of SABRE (System for the Assessment of Body Radioactivity) provides control of multiple detection systems from visual display consoles by means of a command language. A wide range of facilities is available for the display, processing and storage of acquired spectra and complex operations may be pre-programmed by means of the SABRE MACRO language. The hardware includes a CAMAC interface to the detection systems, disc cartridge drives for mass storage of data and programs, and data-links to other computers. The software is written in assembler language and includes special features for the dynamic allocation of computer memory and for safeguarding acquired data. (author)

  5. A Scalable Unsegmented Multiport Memory for FPGA-Based Systems

    Directory of Open Access Journals (Sweden)

    Kevin R. Townsend

    2015-01-01

    On-chip multiport memory cores are crucial primitives for many modern high-performance reconfigurable architectures and multicore systems. Previous approaches to scaling memory cores come at the cost of operating frequency, communication overhead, and logic resources, without increasing the storage capacity of the memory. In this paper, we present two approaches for designing multiport memory cores that are suitable for reconfigurable accelerators with substantial on-chip memory or complex communication. Our design approaches tackle these challenges by banking RAM blocks and utilizing interconnect networks, which allows scaling without sacrificing logic resources. With banking, memory congestion is unavoidable, and we evaluate our multiport memory cores under different memory access patterns to gain insights into different design trade-offs. We demonstrate our implementation with up to 256 memory ports using a Xilinx Virtex-7 FPGA. Our experimental results report high-throughput memories with resource usage that scales with the number of ports.
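
    The banking trade-off can be mimicked in a few lines: map addresses to banks and count how many cycles are needed when each single-ported bank serves one request per cycle. The bank count and access patterns below are arbitrary, not the paper's design.

    ```python
    # Toy model of bank congestion in a banked multiport memory.
    from collections import Counter

    def cycles_needed(addresses, banks=8):
        # One cycle serves at most one request per single-ported bank.
        hits = Counter(addr % banks for addr in addresses)
        return max(hits.values())

    uniform = list(range(16))               # 16 ports, spread over all banks
    worst   = [16 * i for i in range(16)]   # every request maps to bank 0
    print(cycles_needed(uniform), cycles_needed(worst))   # -> 2 16
    ```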

  6. PREFACE: Special section on Computational Fluid Dynamics—in memory of Professor Kunio Kuwahara Special section on Computational Fluid Dynamics—in memory of Professor Kunio Kuwahara

    Science.gov (United States)

    Ishii, Katsuya

    2011-08-01

    This issue includes a special section on computational fluid dynamics (CFD) in memory of the late Professor Kunio Kuwahara, who passed away on 15 September 2008, at the age of 66. This special section includes five articles based on the lectures and discussions at 'The 7th International Nobeyama Workshop on CFD: To the Memory of Professor Kuwahara', held in Tokyo on 23 and 24 September 2009. Professor Kuwahara started his research in fluid dynamics under Professor Imai at the University of Tokyo. His first paper, 'Steady Viscous Flow within Circular Boundary', was published in 1969 with Professor Imai. In this paper, he combined theoretical and numerical methods in fluid dynamics. From that time on, he made significant and seminal contributions to computational fluid dynamics. He undertook pioneering numerical studies on the vortex method in the 1970s. From then to the early nineties, he developed numerical analyses of a variety of three-dimensional unsteady phenomena in incompressible, compressible and complex fluid flows, using his own supercomputers together with academic and industrial co-workers and members of his private research institute, ICFD in Tokyo. In addition, a number of senior and young researchers of fluid mechanics from around the world were invited to ICFD and to the Nobeyama workshops, which were held near his villa, where they intensively discussed new frontier problems of fluid physics and fluid engineering, enjoying Professor Kuwahara's kind hospitality. At the memorial Nobeyama workshop held in 2009, 24 overseas speakers presented their papers, including talks by Dr J P Boris (Naval Research Laboratory), Dr E S Oran (Naval Research Laboratory), Professor Z J Wang (Iowa State University), Dr M Meinke (RWTH Aachen), Professor K Ghia (University of Cincinnati), Professor U Ghia (University of Cincinnati), Professor F Hussain (University of Houston), Professor M Farge (École Normale Supérieure), Professor J Y Yong (National

  7. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
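
    For reference, the recurrence these hardware algorithms decompose is I(x, y) = i(x, y) + I(x-1, y) + I(x, y-1) - I(x-1, y-1), and the payoff is an O(1) box sum from at most four lookups. A small software sketch:

    ```python
    # Integral image and the constant-time box sum it enables.
    import numpy as np

    img = np.arange(25, dtype=np.int64).reshape(5, 5)
    I = img.cumsum(axis=0).cumsum(axis=1)    # the integral image

    def box_sum(I, r0, c0, r1, c1):
        # Sum of img[r0:r1+1, c0:c1+1] from at most four corner lookups.
        s = I[r1, c1]
        if r0 > 0:
            s -= I[r0 - 1, c1]
        if c0 > 0:
            s -= I[r1, c0 - 1]
        if r0 > 0 and c0 > 0:
            s += I[r0 - 1, c0 - 1]
        return s

    assert box_sum(I, 1, 1, 3, 3) == img[1:4, 1:4].sum()
    ```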

  8. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  9. An introduction to digital computing

    CERN Document Server

    George, F H

    2014-01-01

    An Introduction to Digital Computing provides information pertinent to the fundamental aspects of digital computing. This book represents a major step towards the universal availability of programmed material.Organized into four chapters, this book begins with an overview of the fundamental workings of the computer, including the way it handles simple arithmetic problems. This text then provides a brief survey of the basic features of a typical computer that is divided into three sections, namely, the input and output system, the memory system for data storage, and a processing system. Other c

  10. Memory systems, processes, and tasks: taxonomic clarification via factor analysis.

    Science.gov (United States)

    Bruss, Peter J; Mitchell, David B

    2009-01-01

    The nature of various memory systems was examined using factor analysis. We reanalyzed data from 11 memory tasks previously reported in Mitchell and Bruss (2003). Four well-defined factors emerged, closely resembling episodic and semantic memory and conceptual and perceptual implicit memory, in line with both memory systems and transfer-appropriate processing accounts. To explore taxonomic issues, we ran separate analyses on the implicit tasks. Using a cross-format manipulation (pictures vs. words), we identified 3 prototypical tasks. Word fragment completion and picture fragment identification tasks were "factor pure," tapping perceptual processes uniquely. Category exemplar generation revealed its conceptual nature, yielding both cross-format priming and a picture superiority effect. In contrast, word stem completion and picture naming were more complex, revealing attributes of both processes.

  11. Memory under stress: from single systems to network changes.

    Science.gov (United States)

    Schwabe, Lars

    2017-02-01

    Stressful events have profound effects on learning and memory. These effects are mainly mediated by catecholamines and glucocorticoid hormones released from the adrenals during stressful encounters. It has long been known that both catecholamines and glucocorticoids influence the functioning of the hippocampus, a critical hub for episodic memory. However, areas implicated in other forms of memory, such as the insula or the dorsal striatum, can be affected by stress as well. Beyond changes in single memory systems, acute stress triggers the reconfiguration of large-scale neural networks, which sets the stage for a shift from thoughtful, 'cognitive' control of learning and memory toward more reflexive, 'habitual' processes. Stress-related alterations in amygdala connectivity with the hippocampus, dorsal striatum, and prefrontal cortex seem to play a key role in this shift. The bias toward systems proficient in threat processing and the implementation of well-established routines may facilitate coping with an acute stressor. Overreliance on these reflexive systems, or the inability to shift flexibly between them, may however represent a risk factor for psychopathology in the long run. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  12. Models of parallel computation :a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state of the art in parallel computational model research is reviewed. We introduce various models that were developed during the past decades. According to their target architecture features, especially memory organization, we classify these parallel computational models into three generations, and we discuss the models and their characteristics based on this three-generation classification. We believe that with the ever-increasing speed gap between CPU and memory systems, incorporating non-uniform memory hierarchies into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated, so describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers would reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.

  13. The Development of Attention Systems and Working Memory in Infancy.

    Science.gov (United States)

    Reynolds, Greg D; Romano, Alexandra C

    2016-01-01

    In this article, we review research and theory on the development of attention and working memory in infancy using a developmental cognitive neuroscience framework. We begin with a review of studies examining the influence of attention on neural and behavioral correlates of an earlier developing and closely related form of memory (i.e., recognition memory). Findings from studies measuring attention utilizing looking measures, heart rate, and event-related potentials (ERPs) indicate significant developmental change in sustained and selective attention across the infancy period. For example, infants show gains in the magnitude of the attention related response and spend a greater proportion of time engaged in attention with increasing age (Richards and Turner, 2001). Throughout infancy, attention has a significant impact on infant performance on a variety of tasks tapping into recognition memory; however, this approach to examining the influence of infant attention on memory performance has yet to be utilized in research on working memory. In the second half of the article, we review research on working memory in infancy focusing on studies that provide insight into the developmental timing of significant gains in working memory as well as research and theory related to neural systems potentially involved in working memory in early development. We also examine issues related to measuring and distinguishing between working memory and recognition memory in infancy. To conclude, we discuss relations between the development of attention systems and working memory.

  14. Comparison of systems for memory allocation in the C programming language

    OpenAIRE

    Zavrtanik, Matej

    2016-01-01

    The bachelor's thesis describes memory allocation. The work begins with a description of the mechanisms, system calls and data structures used in memory allocators. The goals of memory allocation are listed, along with problems that must be avoided. Afterwards, the construction and allocation behavior of popular memory allocators are described. The work ends with a comparison of the memory allocators based on program execution time and memory usage, on which the conclusion is based.

  15. Development of Ethernet emulation driver for reflective memory

    International Nuclear Information System (INIS)

    Seo, Seong-Heon

    2010-01-01

    Reflective memory (RFM) is adopted as a real-time network in the KSTAR plasma control system (PCS). Since data uploaded from any computer are automatically shared among all the computers on the RFM network, the design of a distributed control system based on RFM is easily implemented through the management of memory mapping. The data providers and consumers are logically well separated, so that, given the memory mapping information, a new control unit can be added without any modification to the existing system other than connecting a new RFM module through an optical cable. The KSTAR PCS is also connected to Ethernet in addition to the RFM, because the RFM does not support the Transmission Control Protocol/Internet Protocol (TCP/IP) and many network services of the operating system, such as the Network File System (NFS) and the Secure Shell (SSH), are based on TCP/IP. Therefore we developed an Ethernet emulation driver for the RFM to eliminate the need for a separate Ethernet network. The driver was tested on Linux kernel 2.6.31. The algorithm of the emulation driver is explained and the experimental setup is presented.

  16. RAM-efficient external memory sorting

    DEFF Research Database (Denmark)

    Arge, Lars; Thorup, Mikkel

    2013-01-01

    In recent years a large number of problems have been considered in external memory models of computation, where the complexity measure is the number of blocks of data that are moved between slow external memory and fast internal memory (also called I/Os). In practice, however, internal memory time often dominates the total running time once I/O-efficiency has been obtained. In this paper we study algorithms for fundamental problems that are simultaneously I/O-efficient and internal-memory efficient in the RAM model of computation.
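
    A minimal sketch of the external-memory side of the problem: sort RAM-sized runs, spill them to disk, then stream a k-way merge so that only about one element per run is resident at a time. The run size and temp-file format below are arbitrary, and the paper's RAM-efficient algorithms go well beyond this baseline.

    ```python
    # Baseline external merge sort: spill sorted runs, then k-way merge.
    import heapq
    import os
    import random
    import tempfile

    def _spill(run):
        # Write one sorted run to its own temp file, one integer per line.
        f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
        f.writelines(f"{x}\n" for x in run)
        f.close()
        return f.name

    def external_sort(items, run_size=100_000):
        runs, buf = [], []
        for x in items:
            buf.append(x)
            if len(buf) >= run_size:        # RAM budget reached: spill a run
                runs.append(_spill(sorted(buf)))
                buf = []
        if buf:
            runs.append(_spill(sorted(buf)))
        files = [open(r) for r in runs]
        try:
            # heapq.merge streams the runs; ~one element per run in memory.
            yield from heapq.merge(*((int(line) for line in f) for f in files))
        finally:
            for f in files:
                f.close()
            for r in runs:
                os.remove(r)

    data = [random.randrange(10**9) for _ in range(250_000)]
    assert list(external_sort(data)) == sorted(data)
    ```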

  17. Photon echo quantum random access memory integration in a quantum computer

    International Nuclear Information System (INIS)

    Moiseev, Sergey A; Andrianov, Sergey N

    2012-01-01

    We have analysed an efficient integration of multi-qubit echo quantum memory (QM) into a quantum computer scheme based on SQUIDs, quantum dots or atomic resonant ensembles in a quantum electrodynamics cavity. Here, one atomic ensemble with controllable inhomogeneous broadening is used for the QM node, and other nodes, characterized by a homogeneously broadened resonant line, are used for processing. We have found the optimal conditions for the efficient integration of the multi-qubit QM modified for the analysed scheme, and we have determined the self-temporal modes providing a perfect reversible transfer of the photon qubits between the QM node and arbitrary processing nodes. The obtained results open the way to the realization of full-scale solid-state quantum computing based on the efficient multi-qubit QM. (paper)

  18. Analysis of the Organization of Lexical Memory

    National Research Council Canada - National Science Library

    Miller, George

    1997-01-01

    The practical outcome of the project, Analysis of the Organization of Lexical Memory, is an electronic lexical database called WordNet that can be incorporated into computer systems for processing English text...

  19. Determination of memory performance

    International Nuclear Information System (INIS)

    Gopych, P.M.

    1999-01-01

    Within the framework of the theory of testing statistical hypotheses, we propose a model definition and a computer method for calculating measures of human memory performance widely used in neuropsychology (free recall, cued recall, and recognition probabilities), together with a model definition and a computer method for calculating the intensities of the cues used in experiments testing human memory quality. Models for active and passive memory traces and their relations are found. It is shown that an autoassociative memory unit in the form of a short two-layer artificial neural network, with or without damage, can be used for the model description of memory performance in subjects with or without local brain lesions

  20. Stress and the engagement of multiple memory systems: integration of animal and human studies.

    Science.gov (United States)

    Schwabe, Lars

    2013-11-01

    Learning and memory can be controlled by distinct memory systems. How these systems are coordinated to optimize learning and behavior has long been unclear. Accumulating evidence indicates that stress may modulate the engagement of multiple memory systems. In particular, rodent and human studies demonstrate that stress facilitates dorsal striatum-dependent "habit" memory, at the expense of hippocampus-dependent "cognitive" memory. Based on these data, a model is proposed which states that the impact of stress on the relative use of multiple memory systems is due to (i) differential effects of hormones and neurotransmitters that are released during stressful events on hippocampal and dorsal striatal memory systems, thus changing the relative strength of and the interactions between these systems, and (ii) a modulatory influence of the amygdala which biases learning toward dorsal striatum-based memory after stress. This shift to habit memory after stress can be adaptive with respect to current performance but might contribute to psychopathology in vulnerable individuals. Copyright © 2013 Wiley Periodicals, Inc.

  1. A versatile data handling system for nuclear physics experiments based on PDP 11/03 micro-computers

    International Nuclear Information System (INIS)

    Raaf, A.J. de

    1979-01-01

    A reliable and low cost data handling system for nuclear physics experiments is described. It is based on two PDP 11/03 micro-computers together with Gec-Elliott CAMAC equipment. For the acquisition of the experimental data a fast system has been designed. It consists of a controller for four ADCs together with an intelligent 38k MOS memory with a word size of 24 bits. (Auth.)

  2. The Impact of Transactive Memory System and Interaction Platform in Collaborative Knowledge Construction on Social Presence and Self-Regulation

    Science.gov (United States)

    Yilmaz, Ramazan; Karaoglan Yilmaz, Fatma Gizem; Kilic Cakmak, Ebru

    2017-01-01

    The purpose of this study is to examine the impacts of transactive memory system (TMS) and interaction platforms in computer-supported collaborative learning (CSCL) on social presence perceptions and self-regulation skills of learners. Within the scope of the study, social presence perceptions and self-regulation skills of students in…

  3. Memory systems interaction in the pigeon: working and reference memory.

    Science.gov (United States)

    Roberts, William A; Strang, Caroline; Macpherson, Krista

    2015-04-01

    Pigeons' performance on a working memory task, symbolic delayed matching-to-sample, was used to examine the interaction between working memory and reference memory. Reference memory was established by training pigeons to discriminate between the comparison cues used in delayed matching as S+ and S- stimuli. Delayed matching retention tests then measured accuracy when working and reference memory were congruent and incongruent. In 4 experiments, it was shown that the interaction between working and reference memory is reciprocal: Strengthening either type of memory leads to a decrease in the influence of the other type of memory. A process dissociation procedure analysis of the data from Experiment 4 showed independence of working and reference memory, and a model of working memory and reference memory interaction was shown to predict the findings reported in the 4 experiments. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  4. Self-correcting quantum memory in a thermal environment

    International Nuclear Information System (INIS)

    Chesi, Stefano; Roethlisberger, Beat; Loss, Daniel

    2010-01-01

    The ability to store information is of fundamental importance to any computer, be it classical or quantum. Identifying systems for quantum memories which rely, analogously to classical memories, on passive error protection ("self-correction") is of greatest interest in quantum information science. While systems with topological ground states have been considered to be promising candidates, a large class of them was recently proven unstable against thermal fluctuations. Here, we propose two-dimensional (2D) spin models unaffected by this result. Specifically, we introduce repulsive long-range interactions in the toric code and establish a memory lifetime that increases polynomially with the system size. This remarkable stability is shown to originate directly from the repulsive long-range nature of the interactions. We study the time dynamics of the quantum memory in terms of diffusing anyons and support our analytical results with extensive numerical simulations. Our findings demonstrate that self-correcting quantum memories can exist in 2D at finite temperatures.

  5. Emotional Arousal and Multiple Memory Systems in the Mammalian Brain

    Directory of Open Access Journals (Sweden)

    Mark G. Packard

    2012-03-01

    Emotional arousal induced by stress and/or anxiety can exert complex effects on learning and memory processes in mammals. Recent studies have begun to link the study of the influence of emotional arousal on memory with earlier research indicating that memory is organized in multiple systems in the brain that differ in terms of the type of memory they mediate. Specifically, these studies have examined whether emotional arousal may have a differential effect on the cognitive and stimulus-response habit memory processes subserved by the hippocampus and dorsal striatum, respectively. Evidence indicates that stress or the peripheral injection of anxiogenic drugs can bias animals and humans toward the use of striatum-dependent habit memory in dual-solution tasks in which both hippocampus-based and striatum-based strategies can provide an adequate solution. A bias toward the use of habit memory can also be produced by intra-basolateral amygdala administration of anxiogenic drugs, consistent with the well-documented role of efferent projections of this brain region in mediating the modulatory influence of emotional arousal on memory. In some learning situations, the bias toward the use of habit memory produced by emotional arousal appears to result from an impairing effect on hippocampus-dependent cognitive memory. Further research examining the neural mechanisms linking emotion and the relative use of multiple memory systems should prove useful in view of the potential role of maladaptive habitual behaviors in various human psychopathologies.

  6. Mass memory formatter subsystem of the adaptive intrusion data system

    International Nuclear Information System (INIS)

    Corlis, N.E.

    1980-09-01

    The Mass Memory Formatter was developed as part of the Adaptive Intrusion Data System (AIDS) to control a 2.4-megabit mass memory. The data from a Memory Controlled Processor is formatted before it is stored in the memory and reformatted during the readout mode. The data is then transmitted to a NOVA 2 minicomputer-controlled magnetic tape recorder for storage. Techniques and circuits are described

  7. Coupling Computer Codes for The Analysis of Severe Accident Using A Pseudo Shared Memory Based on MPI

    International Nuclear Information System (INIS)

    Cho, Young Chul; Park, Chang-Hwan; Kim, Dong-Min

    2016-01-01

    The analysis of severe accidents involves four codes, namely the in-vessel analysis code (CSPACE), the ex-vessel analysis code (SACAP), the corium behavior analysis code (COMPASS), and a fission product behavior analysis code, so it is complex to implement their coupling with methodologies similar to those used for RELAP and CONTEMPT or SPACE and CAP. Because of that, an efficient coupling scheme, the so-called pseudo shared memory architecture, was introduced. In this paper, coupling methodologies are compared, and the methodology used for the analysis of severe accidents is discussed in detail. The barrier between in-vessel and ex-vessel analysis has been removed by coupling the computer codes through a pseudo shared memory architecture based on MPI. What remains is the proper choice and checking of variables and values for the selected severe accident scenarios, e.g., the TMI accident. Even though it is possible to couple more than two computer codes with the pseudo shared memory architecture, the methodology should be revised to couple parallel codes, especially when they are themselves programmed using MPI

  8. Coupling Computer Codes for The Analysis of Severe Accident Using A Pseudo Shared Memory Based on MPI

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Young Chul; Park, Chang-Hwan; Kim, Dong-Min [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    The analysis of severe accidents involves four codes, namely the in-vessel analysis code (CSPACE), the ex-vessel analysis code (SACAP), the corium behavior analysis code (COMPASS), and a fission product behavior analysis code, so it is complex to implement their coupling with methodologies similar to those used for RELAP and CONTEMPT or SPACE and CAP. Because of that, an efficient coupling scheme, the so-called pseudo shared memory architecture, was introduced. In this paper, coupling methodologies are compared, and the methodology used for the analysis of severe accidents is discussed in detail. The barrier between in-vessel and ex-vessel analysis has been removed by coupling the computer codes through a pseudo shared memory architecture based on MPI. What remains is the proper choice and checking of variables and values for the selected severe accident scenarios, e.g., the TMI accident. Even though it is possible to couple more than two computer codes with the pseudo shared memory architecture, the methodology should be revised to couple parallel codes, especially when they are themselves programmed using MPI.
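
    One way to realize a pseudo shared memory between coupled codes, assuming the MPI-3 shared-memory window feature and the mpi4py bindings, is sketched below: ranks on the same node map one physical buffer, so a field written by one code is visible to the other without explicit sends. The buffer size, contents, and the two-rank layout are illustrative, not the paper's implementation.

    ```python
    # Run with, e.g.: mpiexec -n 2 python coupled.py   (file name hypothetical)
    from mpi4py import MPI
    import numpy as np

    world = MPI.COMM_WORLD
    node = world.Split_type(MPI.COMM_TYPE_SHARED)    # ranks sharing this node

    n = 8
    itemsize = MPI.DOUBLE.Get_size()
    win = MPI.Win.Allocate_shared(n * itemsize if node.rank == 0 else 0,
                                  itemsize, comm=node)
    buf, _ = win.Shared_query(0)                     # everyone maps rank 0's block
    field = np.ndarray(buffer=buf, dtype="d", shape=(n,))

    if node.rank == 0:
        field[:] = np.linspace(300.0, 600.0, n)      # e.g. a temperature field
    win.Fence()                                      # synchronize before reads
    if node.rank == 1:
        print("coupled code sees:", field)
    win.Fence()
    win.Free()
    ```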

  9. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    Science.gov (United States)

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation based teaching methods, that is, realistic simulation and non-realistic simulation on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  10. [Artificial intelligence meeting neuropsychology. Semantic memory in normal and pathological aging].

    Science.gov (United States)

    Aimé, Xavier; Charlet, Jean; Maillet, Didier; Belin, Catherine

    2015-03-01

    Artificial intelligence (AI) is the subject of much research, but also of many fantasies. It aims to reproduce human intelligence in its capacities for learning, knowledge storage and computation. In 2014, the Defense Advanced Research Projects Agency (DARPA) started the Restoring Active Memory (RAM) program, which attempts to develop implantable technology to bridge gaps in the injured brain and restore normal memory function to people with memory loss caused by injury or disease. In another field of AI, computational ontologies (formal and shared conceptualizations) attempt to model knowledge in order to represent a structured and unambiguous meaning of the concepts of a target domain. The aim of these structures is to ensure a consensual understanding of their meaning and a univariant use (the same concept is used by all to categorize the same individuals). The first knowledge representations in the AI domain were largely based on models and tests of semantic memory. Semantic memory, as a component of long-term memory, is the memory of words, ideas and concepts. It is the only declarative memory system that resists the effects of age so remarkably. In contrast, non-specific cognitive changes may decrease the performance of the elderly in various tasks and rather reflect difficulties of access to semantic representations than an alteration of the semantic stock itself. Some dementias, like semantic dementia and Alzheimer's disease, are linked to alterations of semantic memory. In this paper, using computational ontologies, we propose a formal and relatively fine-grained modeling in the service of neuropsychology: 1) for the practitioner, with decision support systems; 2) for the patient, as an outsourced cognitive prosthesis; and 3) for the researcher, to study semantic memory.

  11. An 'ADC-Memory' system based on a new principle in data access

    International Nuclear Information System (INIS)

    Pan Dajing; Wu Yongqing; Wang Shibo

    1990-01-01

    A new kind of 'ADC-Memory' (ADC-M) with real-time correction of counting losses due to dead time is now used in a multiuser data acquisition and processing system based on a DUAL/68000 microcomputer. In data access, it replaces the 'DMA + 1' of a classical MCA with the new method 'DMA + N', where N is the weight factor of the correction. The new method is based on the principle of a virtual pulse generator. This method is superior to correction in software because the correction takes no computer time. Thus, this ADC-M can be used in the counting of high-rate pulses
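
    The 'DMA + N' idea can be phrased in a few lines: each stored event increments its histogram channel by a weight N that compensates for events lost during dead time; for a non-paralyzable dead time tau and measured rate m, the classic correction is N = 1/(1 - m*tau). The numbers below are arbitrary, and the paper's virtual-pulse-generator hardware is not reproduced.

    ```python
    def corrected_increment(measured_rate_hz, dead_time_s):
        # Non-paralyzable dead-time weight: each stored count stands for
        # 1 / (1 - m * tau) true counts.
        return 1.0 / (1.0 - measured_rate_hz * dead_time_s)

    histogram = [0.0] * 1024
    channel, m, tau = 512, 50_000.0, 2e-6    # 50 kcps, 2 us dead time
    histogram[channel] += corrected_increment(m, tau)
    print(histogram[channel])                # ~1.111 counts per stored event
    ```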

  12. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  13. Gamma camera investigations using an on-line computer system

    International Nuclear Information System (INIS)

    Vikterloef, K.J.; Beckman, K.-W.; Berne, E.; Liljenfors, B.

    1974-01-01

    A computer system for use with a gamma camera has been developed by Oerebro Regional Hospital and Nukab AB using a PDP 8/e with a 12K core memory connected to a Selektronik gamma camera. It is possible to register, without loss, pictures of high (5kcps) pulse frequency, two separate channels with identical coordinates, fast dynamic functions down to 5 pictures/second, and to perform statistical smoothing and subtraction of two separate pictures. Experience has shown these possibilities to be so valuable that one has difficulty in thinking of a scanning system without them. This applies not only to sophisticated investigations, e.g. dual isotope registration, but also in conventional scanning for avoiding false positive interpretations and increasing the precision. It is possible at relatively low cost to add a dosage planning system. (JIW)

  14. Implantation and use of a version of the GAMALTA computer code in the 3.500 M Lecroy system

    International Nuclear Information System (INIS)

    Auler, L.T.

    1984-05-01

    The GAMALTA computer code was implemented in the Le Croy 3.500 M system, creating an optional analysis function that is loaded into RAM from a diskette. The way to construct functions that become part of the system menu is explained, and a procedure for using the GAMALTA code is given. (M.C.K.)

  15. Amorphous Semiconductors: From Photocatalyst to Computer Memory

    Science.gov (United States)

    Sundararajan, Mayur

    encouraging but inconclusive. Then the method was successfully demonstrated on mesoporous TiO2-SiO2 by showing a shift in its optical bandgap. One special class of amorphous semiconductors is chalcogenide glasses, which exhibit high ionic conductivity even at room temperature. When metal-doped chalcogenide glasses are placed under an electric field, they become electronically conductive. These properties are exploited in the computer memory storage application of Conductive Bridging Random Access Memory (CBRAM). CBRAM is a non-volatile memory that is a strong contender to replace conventional volatile RAMs such as DRAM and SRAM. This technology has already been commercialized, but the working mechanism is still not clearly understood, especially the nature of the conductive bridge filament. In this project, the CBRAM memory cells are fabricated by thermal evaporation with Agx(GeSe2)1-x as the solid electrolyte layer, Ag as the active electrode and Au as the inert electrode. By careful use of cyclic voltammetry, conductive filaments were grown on the surface and in the bulk of the solid electrolyte. The comparison between the two filaments revealed major differences that contradict the existing working mechanism. After compiling all the results, a modified working mechanism is proposed. SAXS is a powerful tool for characterizing the nanostructure of glasses. The analysis of SAXS data to extract useful information is usually performed by different programs. In this project, the Irena and GIFT programs were compared by analysing the SAXS data of glass and glass-ceramic samples. Irena was shown not to be suitable for the analysis of SAXS data with a significant contribution from interparticle interactions. GIFT was demonstrated to be better suited for such analysis. Additionally, the results obtained by both programs for samples with low interparticle interactions were shown to be consistent.

  16. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    Science.gov (United States)

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage without any cost, compared with the numerical method using fast Fourier transform to Fourier transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
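
    As a flavor of why a Bessel expansion can stand in for a numerical Fourier transform, the Jacobi-Anger identity exp(i z sin t) = sum_n J_n(z) exp(i n t) converges after only a handful of terms, as a quick scipy check shows. This is an analogy to the paper's kernel expansion, not its actual derivation.

    ```python
    # Numerically verify a truncated Bessel-series expansion of a kernel.
    import numpy as np
    from scipy.special import jv

    z, t = 3.0, 0.7
    exact = np.exp(1j * z * np.sin(t))
    series = sum(jv(n, z) * np.exp(1j * n * t) for n in range(-30, 31))
    print(abs(exact - series))   # ~1e-16: a short series reproduces the kernel
    ```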

  17. Homeostatic regulation of memory systems and adaptive decisions.

    Science.gov (United States)

    Mizumori, Sheri J Y; Jo, Yong Sang

    2013-11-01

    While it is clear that many brain areas process mnemonic information, understanding how their interactions result in continuously adaptive behaviors has been a challenge. A homeostatic-regulated prediction model of memory is presented that considers the existence of a single memory system based on a multilevel coordinated and integrated network (from cells to neural systems) that determines the extent to which events and outcomes occur as predicted. The "multiple memory systems of the brain" have in common output that signals errors in the prediction of events and/or their outcomes, although these signals differ in terms of what the error signal represents (e.g., hippocampus: context prediction errors vs. midbrain/striatum: reward prediction errors). The prefrontal cortex likely plays a pivotal role in the coordination of prediction analysis within and across prediction brain areas, by virtue of its widespread control and influence and its intrinsic working memory mechanisms. Thus, the prefrontal cortex supports the flexible processing needed to generate adaptive behaviors and predict future outcomes. It is proposed that the prefrontal cortex continually and automatically produces adaptive responses according to homeostatic regulatory principles: it may serve as a controller that is intrinsically driven to maintain in prediction areas an experience-dependent firing-rate set point that ensures adaptive, temporally and spatially resolved neural responses to future prediction errors. This same drive by the prefrontal cortex may also restore set-point firing rates after deviations (i.e., prediction errors) are detected. In this way, the prefrontal cortex contributes to reducing uncertainty in prediction systems. An emergent outcome of this homeostatic view may be the flexible and adaptive control that the prefrontal cortex is known to implement (i.e., working memory) in the most challenging of situations. Compromise to any of the prediction circuits should result in

  18. Studies of Human Memory and Language Processing.

    Science.gov (United States)

    Collins, Allan M.

    The purposes of this study were to determine the nature of human semantic memory and to obtain knowledge usable in the future development of computer systems that can converse with people. The work was based on a computer model which is designed to comprehend English text, relating the text to information stored in a semantic data base that is…

  19. Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment

    Science.gov (United States)

    Hancock, Thomas M., III

    1999-01-01

    This paper details the scientific objectives, experiment design, data collection method, and post-flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip". In its mildest form, a soft-event upset can cause software exceptions, unexpected events, spacecraft safing (ending data collection), or corruption of fault-protection and error-recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment detectors. Post-flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.

  20. Distributed memory in a heterogeneous network, as used in the CERN-PS complex timing system

    CERN Document Server

    Kovaltsov, V I

    1995-01-01

    The Distributed Table Manager (DTM) is a fast and efficient utility for distributing named binary data structures called Tables, of arbitrary size and structure, around a heterogeneous network of computers to a set of registered clients. The Tables are transmitted over a UDP network between DTM servers in network format, where the servers perform the conversions to and from host format for local clients. The servers provide clients with synchronization mechanisms, a choice of network data flows, and table options such as keeping table disc copies, shared memory or heap memory table allocation, table read/write permissions, and table subnet broadcasting. DTM has been designed to be easily maintainable, and to automatically recover from the type of errors typically encountered in a large control system network. The DTM system is based on a three level server daemon hierarchy, in which an inter daemon protocol handles network failures, and incorporates recovery procedures which will guarantee table consistency w...

  1. Performing an allreduce operation using shared memory

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Dozsa, Gabor [Ardsley, NY; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.
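
    A minimal Python analogue of the claimed scheme, assuming a shared counter standing in for the patent's job status object and fixed-size chunks as the shared-memory work units; all names and sizes are hypothetical:

```python
import multiprocessing as mp
import numpy as np

N_CORES, N_ELEMS, CHUNK = 4, 1 << 14, 2048

def allreduce_worker(inputs, result, next_unit, barrier):
    # View the shared buffers as NumPy arrays (per-core inputs, one output).
    ins = np.frombuffer(inputs.get_obj()).reshape(N_CORES, N_ELEMS)
    out = np.frombuffer(result.get_obj())
    barrier.wait()                          # all contributions are in place
    while True:
        with next_unit.get_lock():          # stands in for the job status object
            unit = next_unit.value
            next_unit.value += 1
        lo = unit * CHUNK
        if lo >= N_ELEMS:
            break                           # no work units left
        # One work unit: reduce this chunk across every core's buffer.
        out[lo:lo + CHUNK] = ins[:, lo:lo + CHUNK].sum(axis=0)
    barrier.wait()                          # allreduce complete on the node

if __name__ == "__main__":
    inputs = mp.Array("d", N_CORES * N_ELEMS)
    result = mp.Array("d", N_ELEMS)
    next_unit = mp.Value("i", 0)
    barrier = mp.Barrier(N_CORES)
    np.frombuffer(inputs.get_obj())[:] = 1.0    # toy contributions
    procs = [mp.Process(target=allreduce_worker,
                        args=(inputs, result, next_unit, barrier))
             for _ in range(N_CORES)]
    for p in procs: p.start()
    for p in procs: p.join()
    assert np.allclose(np.frombuffer(result.get_obj()), N_CORES)
```

    Because the result lives in shared memory, every core on the node sees the reduced vector once the final barrier completes, which matches the allreduce semantics the claim describes.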

  2. Shape memory alloys applied to improve rotor-bearing system dynamics - an experimental investigation

    DEFF Research Database (Denmark)

    Enemark, Søren; Santos, Ilmar; Savi, Marcelo A.

    2015-01-01

    passing through critical speeds. In this work, the feasibility of applying shape memory alloys to a rotating system is experimentally investigated. Shape memory alloys can change their stiffness with temperature variations and thus they may change system dynamics. Shape memory alloys also exhibit...... perturbations and mass imbalance responses of the rotor-bearing system at different temperatures and excitation frequencies are carried out to determine the dynamic behaviour of the system. The behaviour and the performance in terms of vibration reduction and system adaptability are compared against a benchmark...... configuration comprised by the same system having steel springs instead of shape memory alloy springs. The experimental results clearly show that the stiffness changes and hysteretic behaviour of the shape memory alloys springs alter system dynamics both in terms of critical speeds and mode shapes. Vibration...

  3. Evolving spiking networks with variable resistive memories.

    Science.gov (United States)

    Howard, Gerard; Bull, Larry; de Lacy Costello, Ben; Gale, Ella; Adamatzky, Andrew

    2014-01-01

    Neuromorphic computing is a brainlike information processing paradigm that requires adaptive learning mechanisms. A spiking neuro-evolutionary system is used for this purpose; plastic resistive memories are implemented as synapses in spiking neural networks. The evolutionary design process exploits parameter self-adaptation and allows the topology and synaptic weights to be evolved for each network in an autonomous manner. Variable resistive memories are the focus of this research; each synapse has its own conductance profile which modifies the plastic behaviour of the device and may be altered during evolution. These variable resistive networks are evaluated on a noisy robotic dynamic-reward scenario against two static resistive memories and a system containing standard connections only. The results indicate that the extra behavioural degrees of freedom available to the networks incorporating variable resistive memories enable them to outperform the comparative synapse types.

  4. Effects of Violent and Non-Violent Computer Game Content on Memory Performance in Adolescents

    Science.gov (United States)

    Maass, Asja; Kollhorster, Kirsten; Riediger, Annemarie; MacDonald, Vanessa; Lohaus, Arnold

    2011-01-01

    The present study focuses on the short-term effects of electronic entertainment media on memory and learning processes. It compares the effects of violent versus non-violent computer game content in a condition of playing and in another condition of watching the same game. The participants consisted of 83 female and 94 male adolescents with a mean…

  5. Working Memory Interventions with Children: Classrooms or Computers?

    Science.gov (United States)

    Colmar, Susan; Double, Kit

    2017-01-01

    The importance of working memory to classroom functioning and academic outcomes has led to the development of many interventions designed to enhance students' working memory. In this article we briefly review the evidence for the relative effectiveness of classroom and computerised working memory interventions in bringing about measurable and…

  6. A unified theory for systems and cellular memory consolidation.

    Science.gov (United States)

    Dash, Pramod K; Hebert, April E; Runyan, Jason D

    2004-04-01

    The time-limited role of the hippocampus for explicit memory storage has been referred to as systems consolidation where learning-related changes occur first in the hippocampus followed by the gradual development of a more distributed memory trace in the neocortex. Recent experiments are beginning to show that learning induces plasticity-related molecular changes in the neocortex as well as in the hippocampus and with a similar time course. Present memory consolidation theories do not account for these findings. In this report, we present a theory (the C theory) that incorporates these new findings, provides an explanation for the length of time for hippocampal dependency, and that can account for the apparent longer consolidation periods in species with larger brains. This theory proposes that a process of cellular consolidation occurs in the hippocampus and in areas of the neocortex during and shortly after learning resulting in long-term memory storage in both areas. For a limited time, the hippocampus is necessary for memory retrieval, a process involving the coordinated reactivation of these areas. This reactivation is later mediated by longer extrahippocampal connectivity between areas. The delay in hippocampal-independent memory retrieval is the time it takes for gene products in these longer extrahippocampal projections to be transported from the soma to tagged synapses by slow axonal transport. This cellular transport event defines the period of hippocampal dependency and, thus, the duration of memory consolidation. The theoretical description for memory consolidation presented in this review provides alternative explanations for several experimental observations and presents a unification of the concepts of systems and cellular memory consolidation.

  7. Optical quantum memory

    Science.gov (United States)

    Lvovsky, Alexander I.; Sanders, Barry C.; Tittel, Wolfgang

    2009-12-01

    Quantum memory is essential for the development of many devices in quantum information processing, including a synchronization tool that matches various processes within a quantum computer, an identity quantum gate that leaves any state unchanged, and a mechanism to convert heralded photons to on-demand photons. In addition to quantum computing, quantum memory will be instrumental for implementing long-distance quantum communication using quantum repeaters. The importance of this basic quantum gate is exemplified by the multitude of optical quantum memory mechanisms being studied, such as optical delay lines, cavities and electromagnetically induced transparency, as well as schemes that rely on photon echoes and the off-resonant Faraday interaction. Here, we report on state-of-the-art developments in the field of optical quantum memory, establish criteria for successful quantum memory and detail current performance levels.

  8. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.
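
    The decomposition can be sketched with MPI one-sided (RMA) operations, here via mpi4py; the window sizes, block lengths, and neighbor choice are placeholder assumptions, not the authors' implementation:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each node owns one slab of cross-section data and one tally array,
# both exposed through RMA windows (sizes here are placeholders).
xs_local = np.full(1000, float(rank))
tally_local = np.zeros(1000)
win_xs = MPI.Win.Create(xs_local, comm=comm)
win_tally = MPI.Win.Create(tally_local, comm=comm)

owner = (rank + 1) % size
block = np.empty(16)

# Remotely retrieve a small block of geometry/cross-section data as needed.
win_xs.Lock(owner, MPI.LOCK_SHARED)
win_xs.Get(block, owner, target=[0, 16, MPI.DOUBLE])
win_xs.Unlock(owner)

# Remotely accumulate a tally when a particle leaves the local domain.
contrib = np.ones(16)
win_tally.Lock(owner, MPI.LOCK_SHARED)
win_tally.Accumulate(contrib, owner, target=[0, 16, MPI.DOUBLE], op=MPI.SUM)
win_tally.Unlock(owner)

win_xs.Free(); win_tally.Free()
```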

  9. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit; Brown, Forrest

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations. (author)

  10. Control programs of multichannel pulse height analyzer with CAMAC system using FACOM U-200 mini-computer

    International Nuclear Information System (INIS)

    Yamagishi, Kojiro

    1978-02-01

    The 4096-channel Pulse Height Analyzer (PHA) assembled from CAMAC plug-in units has been developed at JAERI. The PHA consists of an ADC unit, a CRT display unit, and CAMAC plug-in units, namely a memory controller, an MCA timer, a 4K-word RAM memory, and a CRT driver. The system is connected on-line to the FACOM U-200 minicomputer through the CAMAC interface unit (crate controller). The software for on-line data acquisition has been developed: four utility programs written in FORTRAN and two program packages written in the assembler language FASP, namely the CAMAC Program Package and the Basic Input/Output Program Package. The CAMAC Program Package has 18 subroutine programs for controlling the CAMAC plug-in units from the FACOM U-200, and the Basic Input/Output Program Package has 26 subroutine programs for data input/output to and from a typewriter, keyboard, cassette magnetic tape, and open-reel magnetic tape. These subroutine programs are all FORTRAN-callable. The PHA with the CAMAC system is first outlined, and then the usage of the four utility programs, the CAMAC Program Package, and the Basic Input/Output Program Package is described in detail. (auth.)

  11. All-spin logic operations: Memory device and reconfigurable computing

    Science.gov (United States)

    Patra, Moumita; Maiti, Santanu K.

    2018-02-01

    Exploiting the spin degree of freedom of the electron, a new proposal is given to characterize spin-based logical operations using a quantum interferometer that can be utilized as a programmable spin logic device (PSLD). The ON and OFF states of both inputs and outputs are described by spin state only, circumventing the spin-to-charge conversion required at every stage in conventional devices, where the extra hardware involved can eventually diminish efficiency. All possible logic functions can be engineered from a single device without redesigning the circuit, which offers opportunities for designing a new generation of spintronic devices. Moreover, we also discuss the utilization of the present model as a memory device and suitable computing operations, with proposed experimental setups.

  12. Simulation of radiation effects on three-dimensional computer optical memories

    Science.gov (United States)

    Moscovitch, M.; Emfietzoglou, D.

    1997-01-01

    A model was developed to simulate the effects of heavy charged-particle (HCP) radiation on the information stored in three-dimensional computer optical memories. The model is based on (i) the HCP track radial dose distribution, (ii) the spatial and temporal distribution of temperature in the track, (iii) the matrix-specific radiation-induced changes that will affect the response, and (iv) the kinetics of transition of photochromic molecules from the colored to the colorless isomeric form (bit flip). It is shown that information stored in a volume of several nanometers radius around the particle's track axis may be lost. The magnitude of the effect is dependent on the particle's track structure.

  13. Read method compensating parasitic sneak currents in a crossbar memristive memory

    KAUST Repository

    Zidan, Mohammed A.; Omran, Hesham; Naous, Rawan; Salem, Ahmed Sultan; Salama, Khaled N.

    2017-01-01

    properties of the computer memory system to address this sneak-paths problem. The method of the invention is a method for reading a target memory cell located at an intersection of a target row of a gateless array and a target column of the gateless array

  14. On the Performance of Three In-Memory Data Systems for On Line Analytical Processing

    Directory of Open Access Journals (Sweden)

    Ionut HRUBARU

    2017-01-01

    Full Text Available In-memory database systems are among the most recent and most promising Big Data technologies, being developed and released either as brand-new distributed systems or as extensions of older monolithic (centralized) database systems. As the name suggests, in-memory systems cache all data in special memory structures. Many are part of the NewSQL strand and aim to bridge the gap between OLTP and OLAP in so-called Hybrid Transactional/Analytical Processing (HTAP) systems. This paper tests the performance of such systems on TPC-H analytical workloads. Performance is analyzed in terms of data loading, memory footprint, and execution time of the TPC-H query set for three in-memory data systems: Oracle, SQL Server, and MemSQL. Tests are subsequently deployed on classical on-disk architectures and the results compared to the in-memory solutions. As in-memory operation is an enterprise-edition feature, the associated costs are also considered.

  15. Static Computer Memory Integrity Testing (SCMIT): An experiment flown on STS-40 as part of GAS payload G-616

    Science.gov (United States)

    Hancock, Thomas

    1993-01-01

    This experiment investigated the integrity of static computer memory (floppy disk media) when exposed to the environment of low earth orbit. The experiment attempted to record soft-event upsets (bit-flips) in static computer memory. Typical conditions that exist in low earth orbit that may cause soft-event upsets include: cosmic rays, low level background radiation, charged fields, static charges, and the earth's magnetic field. Over the years several spacecraft have been affected by soft-event upsets (bit-flips), and these events have caused a loss of data or affected spacecraft guidance and control. This paper describes a commercial spin-off that is being developed from the experiment.

  16. Dissociation of spatial memory systems in Williams syndrome.

    Science.gov (United States)

    Bostelmann, Mathilde; Fragnière, Emilie; Costanzo, Floriana; Di Vara, Silvia; Menghini, Deny; Vicari, Stefano; Lavenex, Pierre; Lavenex, Pamela Banta

    2017-11-01

    Williams syndrome (WS), a genetic deletion syndrome, is characterized by severe visuospatial deficits affecting performance on both tabletop spatial tasks and on tasks which assess orientation and navigation. Nevertheless, previous studies of WS spatial capacities have ignored the fact that two different spatial memory systems are believed to contribute parallel spatial representations supporting navigation. The place learning system depends on the hippocampal formation and creates flexible relational representations of the environment, also known as cognitive maps. The spatial response learning system depends on the striatum and creates fixed stimulus-response representations, also known as habits. Indeed, no study assessing WS spatial competence has used tasks which selectively target these two spatial memory systems. Here, we report that individuals with WS exhibit a dissociation in their spatial abilities subserved by these two memory systems. As compared to typically developing (TD) children in the same mental age range, place learning performance was impaired in individuals with WS. In contrast, their spatial response learning performance was facilitated. Our findings in individuals with WS and TD children suggest that place learning and response learning interact competitively to control the behavioral strategies normally used to support human spatial navigation. Our findings further suggest that the neural pathways supporting place learning may be affected by the genetic deletion that characterizes WS, whereas those supporting response learning may be relatively preserved. The dissociation observed between these two spatial memory systems provides a coherent theoretical framework to characterize the spatial abilities of individuals with WS, and may lead to the development of new learning strategies based on their facilitated response learning abilities. © 2017 Wiley Periodicals, Inc.

  17. Mini-computer in standard CAMAC

    International Nuclear Information System (INIS)

    Meyer, J.M.; Perrin, J.; Lecoq, J.; Tedjini, H.; Metzger, G.

    1975-01-01

    CAMAC is the designation of rules for the design and use of modular electronic data-handling equipment. The rules offer a standard scheme for interfacing computers to transducers and actuators in on-line systems. Where systems do not need a large memory capacity, or where computing power is provided by an associated computer, a processor implemented in a CAMAC structure is of great interest for such a standard. Such a processor was built around an INTEL 8008 CPU chip, using a CAMAC crate, a memory bus, an I/O bus or CAMAC horizontal Dataway, and a bus connecting the CPU to the operator's panel. The interrupt system has six levels. To allow multiprogramming, the 8008's instruction set was extended by creating a jump-and-mark instruction. A multi-task operating system was implemented, allowing the execution of real-time tasks, process control, and program debugging. Three units have been built so far, for process control, education, testing of CAMAC modules, and image processing [fr

  18. Solving linear systems in FLICA-4, thermohydraulic code for 3-D transient computations

    International Nuclear Information System (INIS)

    Allaire, G.

    1995-01-01

    FLICA-4 is a computer code, developed at the CEA (France), devoted to steady-state and transient thermal-hydraulic analysis of nuclear reactor cores, for small problems (around 100 mesh cells) as well as large ones (more than 100,000), on either standard workstations or vector supercomputers. As in other time-implicit codes, the most time- and memory-consuming part of FLICA-4 is the routine dedicated to solving the linear system (whose size is of the order of the number of cells). Therefore, the efficiency of the code is crucially influenced by the optimization of the algorithms used in assembling and solving linear systems: direct methods such as Gaussian (LU) decomposition for moderate-size problems, and iterative methods such as the preconditioned conjugate gradient for large problems. 6 figs., 13 refs
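
    The record does not show FLICA-4's solver routines; as an illustration of the iterative option it names, the following is a minimal Jacobi-preconditioned conjugate gradient in NumPy, applied to a toy symmetric positive-definite tridiagonal system (a real reactor-core matrix need not look like this):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    # Jacobi-preconditioned conjugate gradient for SPD systems;
    # M_inv is the elementwise inverse of the diagonal of A.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                 # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100                            # toy 1D diffusion-like SPD matrix
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))   # residual near zero
```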

  19. Fluctuations in interacting particle systems with memory

    International Nuclear Information System (INIS)

    Harris, Rosemary J

    2015-01-01

    We consider the effects of long-range temporal correlations in many-particle systems, focusing particularly on fluctuations about the typical behaviour. For a specific class of memory dependence we discuss the modification of the large deviation principle describing the probability of rare currents and show how superdiffusive behaviour can emerge. We illustrate the general framework with detailed calculations for a memory-dependent version of the totally asymmetric simple exclusion process as well as indicating connections to other recent work

  20. BLACKCOMB2: Hardware-software co-design for non-volatile memory in exascale systems

    Energy Technology Data Exchange (ETDEWEB)

    Mudge, Trevor [Univ. of Michigan, Ann Arbor, MI (United States)

    2017-12-15

    This work was part of a larger project, Blackcomb2, centered at Oak Ridge National Labs (Jeff Vetter, PI), to investigate the opportunities for replacing or supplementing DRAM main memory with non-volatile memory (NVmemory) in exascale memory systems. The goal was to reduce the energy consumed by future supercomputer memory systems and to improve their resiliency. Building on the accomplishments of the original Blackcomb project, funded in 2010, the goal for Blackcomb2 was to identify, evaluate, and optimize the most promising emerging memory technologies and architecture hardware and software technologies, which are essential to provide the necessary memory capacity, performance, resilience, and energy efficiency in exascale systems. Capacity and energy are the key drivers.

  1. Computer Simulations of Developmental Change: The Contributions of Working Memory Capacity and Long-Term Knowledge

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2008-01-01

    Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…

  2. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
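
    The tile low-rank idea can be illustrated (independently of HiCMA's actual kernels and data structures) by compressing a single smooth off-diagonal tile with a truncated SVD; the kernel and accuracy threshold below are arbitrary choices:

```python
import numpy as np

def compress_tile(tile, eps=1e-8):
    # Truncated SVD keeps only singular values above the accuracy
    # threshold, storing a dense n-by-n tile as two slim rank-k factors.
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    k = max(int(np.sum(s > eps)), 1)        # numerical rank at accuracy eps
    return U[:, :k] * s[:k], Vt[:k]

n = 256
tile = 1.0 / (1.0 + np.add.outer(np.arange(n), np.arange(n)))  # smooth kernel
Uk, Vk = compress_tile(tile)
print(Uk.shape[1])                          # numerical rank k << n
print(np.linalg.norm(tile - Uk @ Vk) / np.linalg.norm(tile))   # ~eps-level
```

    The memory footprint drops from n*n entries to roughly 2*n*k, which is where the order-of-magnitude savings reported above come from when most tiles are data-sparse.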

  3. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  4. Analyses of Markov decision process structure regarding the possible strategic use of interacting memory systems

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2008-12-01

    Full Text Available Behavioral tasks are often used to study the different memory systems present in humans and animals. Such tasks are usually designed to isolate and measure some aspect of a single memory system. However, it is not necessarily clear that any given task actually does isolate a system, or that the strategy used by a subject in the experiment is the one desired by the experimenter. We have previously shown that when tasks are written mathematically as a form of partially observable Markov decision process, the structure of the task provides information regarding the possible utility of certain memory systems. These previous analyses dealt with the disambiguation problem: given a specific ambiguous observation of the environment, is there information provided by a given memory strategy that can disambiguate that observation to allow a correct decision? Here we extend this approach to cases where multiple memory systems can be strategically combined in different ways. Specifically, we analyze the disambiguation arising from three ways by which episodic-like memory retrieval might be cued (by another episodic-like memory, by a semantic association, or by working memory for some earlier observation). We also consider the disambiguation arising from holding earlier working memories, episodic-like memories, or semantic associations in working memory. From these analyses we can begin to develop a quantitative hierarchy among memory systems, in which stimulus-response memories and semantic associations provide no disambiguation while the episodic memory system provides the most flexible

  5. Data systems and computer science programs: Overview

    Science.gov (United States)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  6. Operating systems. [of computers

    Science.gov (United States)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
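
    As a small illustration of the semaphore mechanism mentioned above (in Python rather than at the OS level), a counting semaphore limits how many 'primitive processes' enter a synchronized region at once:

```python
import threading
import time

# At most two workers may hold the resource simultaneously;
# acquire is the classical P operation, release the V operation.
slots = threading.Semaphore(2)

def primitive_process(i):
    with slots:                      # P on entry, V on exit
        time.sleep(0.1)              # simulated work inside the region
        print(f"process {i} done")

threads = [threading.Thread(target=primitive_process, args=(i,))
           for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
```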

  7. Two years experience with a computer-assisted monitoring and recording system used in gynecological afterloading therapy

    International Nuclear Information System (INIS)

    Kaulich, T.W.; Boedi, R.; Nuesslin, F.; Hirnle, P.

    1990-01-01

    A computer program running on a simple desk calculator has been developed for monitoring and recording gynecological high-dose afterloading therapy. For treatment monitoring, the multiple-probe AM6 system (PTW-Freiburg) is used, which allows dose measurements in the urinary bladder and the rectum. The probe signals are processed on-line to indicate the actual dose at the measuring points. After completion of the irradiation, the treatment is documented. During fractionated treatment, the measured data are stored in the computer memory for calculating the total accumulated dose. The monitoring and recording system described above has proven its usefulness during two years of clinical work. (orig.) [de

  8. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    Science.gov (United States)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may push performance requirements towards petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design-optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithm techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor rapid single flux quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by convention

  9. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    Energy Technology Data Exchange (ETDEWEB)

    Su, Chun-Yi [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States)

    2014-12-16

    By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors are typically equipped with multiple memory subsystems to improve data throughput. In addition, such systems employ heterogeneous processors such as GPUs and heterogeneous memories such as non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and the system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research on improving performance and energy efficiency in heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive, or at best a few primitives, in the system. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires an in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources automatically remains a dark art, since the tradeoffs among programming, energy, and performance are insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrency throttling on the performance and energy of multi-core, non-uniform memory access

  10. CAM: A Collaborative Object Memory System

    NARCIS (Netherlands)

    Vyas, Dhaval; Nijholt, Antinus; Kröner, Alexander

    2010-01-01

    Physical design objects such as sketches, drawings, collages, storyboards and models play an important role in supporting communication and coordination in design studios. CAM (Cooperative Artefact Memory) is a mobile-tagging based messaging system that allows designers to collaboratively store

  11. Multiple Systems of Spatial Memory: Evidence from Described Scenes

    Science.gov (United States)

    Avraamides, Marios N.; Kelly, Jonathan W.

    2010-01-01

    Recent models in spatial cognition posit that distinct memory systems are responsible for maintaining transient and enduring spatial relations. The authors used perspective-taking performance to assess the presence of these enduring and transient spatial memories for locations encoded through verbal descriptions. Across 3 experiments, spatial…

  12. Concepts and implementation of a virtual memory developments for business orientation

    International Nuclear Information System (INIS)

    Sablet, Georges de

    1976-05-01

    APL is a very powerful language, especially adapted to the manipulation of very large arrays. It is generally implemented as an interpreter included in a general system. The great power of the APL system and the great size of the information on which it may operate require large computers and restrict the use of APL. We sought a memory management scheme that permits the implementation of an optimized APL interpreter on a minicomputer. This report presents the most important classical ways of managing memory and explains the system developed on the MULTI-20 (Intertechnique). The memory management is based on virtual memory principles with paging and segmentation. Two page sizes are available, small and large, which may be used simultaneously and which optimize input/output and the use of auxiliary space. The other part of this report describes facilities for developing this language for users who are especially interested in business applications. We introduce generalized arrays, which eliminate the concept of files: files are simply structured arrays, and the user has no need to know how to manage tapes or disks. To the user, everything appears to reside in core memory. (author) [fr

  13. A single-system model predicts recognition memory and repetition priming in amnesia.

    Science.gov (United States)

    Berry, Christopher J; Kessels, Roy P C; Wester, Arie J; Shanks, David R

    2014-08-13

    We challenge the claim that there are distinct neural systems for explicit and implicit memory by demonstrating that a formal single-system model predicts the pattern of recognition memory (explicit) and repetition priming (implicit) in amnesia. In the current investigation, human participants with amnesia categorized pictures of objects at study and then, at test, identified fragmented versions of studied (old) and nonstudied (new) objects (providing a measure of priming), and made a recognition memory judgment (old vs new) for each object. Numerous results in the amnesic patients were predicted in advance by the single-system model, as follows: (1) deficits in recognition memory and priming were evident relative to a control group; (2) items judged as old were identified at greater levels of fragmentation than items judged new, regardless of whether the items were actually old or new; and (3) the magnitude of the priming effect (the identification advantage for old vs new items) overall was greater than that of items judged new. Model evidence measures also favored the single-system model over two formal multiple-systems models. The findings support the single-system model, which explains the pattern of recognition and priming in amnesia primarily as a reduction in the strength of a single dimension of memory strength, rather than a selective explicit memory system deficit. Copyright © 2014 the authors.
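
    The single-system account lends itself to a toy simulation: one latent strength per item feeds both tasks through independent noise, and a reduced old-item strength mean stands in for amnesia. The parameter values below are illustrative assumptions, not the authors' fitted values:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
mu_old = 0.4                    # reduced old-item strength (amnesia-like)
f = np.r_[rng.normal(mu_old, 1, n), rng.normal(0, 1, n)]   # old, then new
is_old = np.r_[np.ones(n, bool), np.zeros(n, bool)]

recognition = f + rng.normal(0, 1, 2 * n)  # explicit task: strength + own noise
priming = f + rng.normal(0, 1, 2 * n)      # implicit task: same strength, own noise
judged_old = recognition > 0.5

# Items judged old show more priming than items judged new,
# whether or not they were actually studied (cf. prediction 2 above).
for actually_old in (True, False):
    sel = is_old == actually_old
    print(actually_old,
          priming[sel & judged_old].mean(),
          priming[sel & ~judged_old].mean())
```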

  14. Subversion: The Neglected Aspect of Computer Security.

    Science.gov (United States)

    1980-06-01

    it into the memory of the computer. These are called flows on covert channels... A simple covert channel is the running time of a program. Because... program and, in doing so, gives it 'permission' to perform its covert functions. Not only will most computer systems not prevent the employment of such a... R. Schell, Major, USAF, June 1974. Lackey, R.P., "Penetration of Computer Systems, an Overview", Honeywell Computer Journal, Vol. 8, No. 2, 1974

  15. Portable wireless neurofeedback system of EEG alpha rhythm enhances memory.

    Science.gov (United States)

    Wei, Ting-Ying; Chang, Da-Wei; Liu, You-De; Liu, Chen-Wei; Young, Chung-Ping; Liang, Sheng-Fu; Shaw, Fu-Zen

    2017-11-13

    The effect of neurofeedback training (NFT) on the enhancement of cognitive function or the amelioration of clinical symptoms is inconclusive. The trainability of brain rhythms using a neurofeedback system is uncertain because various experimental designs were used in previous studies. The current study aimed to develop a portable wireless NFT system for the alpha rhythm and to validate the effect of the NFT system on memory with a sham-controlled group. The proposed system contained an EEG signal analysis device and a smartphone with wireless Bluetooth low-energy technology. Instantaneous 1-s EEG power and contiguous 5-min EEG power throughout the training were provided as feedback information. The training performance and its progression were recorded to boost the usability of our device. Participants were blinded and randomly assigned to either the control group, receiving random 4-Hz power, or the Alpha group, receiving 8-12-Hz power. Working memory and episodic memory were assessed by the backward digit span task and the word-pair task, respectively. The portable neurofeedback system had the advantages of tiny size and long-term recording, and demonstrated the trainability of the alpha rhythm in terms of significant increases in 8-12 Hz power and duration. Moreover, accuracies on the backward digit span task and word-pair task showed significant enhancement in the Alpha group after training compared to the control group. Our tiny portable device demonstrated successful trainability of the alpha rhythm and enhanced two kinds of memory. The present study suggests that the portable neurofeedback system provides an alternative intervention for memory enhancement.
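
    The feedback signal in such a system is plausibly a relative band-power estimate; here is a minimal sketch, assuming a windowed FFT over a 1-s epoch and a hypothetical 256 Hz sampling rate (not the authors' firmware):

```python
import numpy as np

def relative_alpha_power(epoch, fs):
    # Windowed FFT power in the 8-12 Hz band as a fraction of total power:
    # a plausible form for the instantaneous 1-s feedback value.
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(len(epoch)))) ** 2
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    alpha = (freqs >= 8) & (freqs <= 12)
    return spectrum[alpha].sum() / spectrum.sum()

fs = 256                                  # hypothetical sampling rate
t = np.arange(fs) / fs                    # one-second epoch
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(fs)
print(relative_alpha_power(epoch, fs))    # high: the epoch is alpha-dominated
```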

  16. Collectively loading an application in a parallel computer

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
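
    The leader-broadcast pattern maps naturally onto a collective operation; a minimal mpi4py sketch, with a hypothetical file name standing in for the application image:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
leader = 0                                     # the 'job leader compute node'

if comm.Get_rank() == leader:
    # Only the leader touches storage to retrieve the application image.
    with open("application.bin", "rb") as fh:  # hypothetical file name
        image = fh.read()
else:
    image = None

# One collective broadcast delivers the image to every node in the subset,
# instead of every node hitting the filesystem independently.
image = comm.bcast(image, root=leader)
```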

  17. Energy-Efficient Abundant-Data Computing: The N3XT 1,000X

    OpenAIRE

    Aly Mohamed M. Sabry; Gao Mingyu; Hills Gage; Lee Chi-Shuen; Pinter Greg; Shulaker Max M.; Wu Tony F.; Asheghi Mehdi; Bokor Jeff; Franchetti Franz; Goodson Kenneth E.; Kozyrakis Christos; Markov Igor; Olukotun Kunle; Pileggi Larry

    2015-01-01

    Next-generation information technologies will process unprecedented amounts of loosely structured data that overwhelm existing computing systems. N3XT improves the energy efficiency of abundant-data applications 1,000-fold by using new logic and memory technologies, 3D integration with fine-grained connectivity, and new architectures for computation immersed in memory.

  18. Assessing Programming Costs of Explicit Memory Localization on a Large Scale Shared Memory Multiprocessor

    Directory of Open Access Journals (Sweden)

    Silvio Picano

    1992-01-01

    Full Text Available We present detailed experimental work involving a commercially available large-scale shared-memory multiple instruction stream-multiple data stream (MIMD) parallel computer having a software-controlled cache coherence mechanism. To make effective use of such an architecture, the programmer is responsible for designing the program's structure to match the underlying multiprocessor's capabilities. We describe the techniques used to exploit our multiprocessor (the BBN TC2000) on a network simulation program, showing the resulting performance gains and the associated programming costs. We show that an efficient implementation relies heavily on the user's ability to explicitly manage the memory system.

  19. A Gamma Memory Neural Network for System Identification

    Science.gov (United States)

    Motter, Mark A.; Principe, Jose C.

    1992-01-01

    A gamma neural network topology is investigated for a system identification application. A discrete gamma memory structure is used in the input layer, providing delayed values of both the control inputs and the network output to the input layer. The discrete gamma memory structure implements a tapped dispersive delay line, with the amount of dispersion regulated by a single, adaptable parameter. The network is trained using static back propagation, but captures significant features of the system dynamics. The system dynamics identified with the network are the Mach number dynamics of the 16 Foot Transonic Tunnel at NASA Langley Research Center, Hampton, Virginia. The training data spans an operating range of Mach numbers from 0.4 to 1.3.
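
    The discrete gamma memory itself is compact enough to state directly; a sketch of the standard recursion (the network and training loop are omitted), with mu as the single adaptable dispersion parameter:

```python
import numpy as np

def gamma_memory(u, order, mu):
    # Discrete gamma memory: a cascade of identical leaky stages,
    #   x_0[t] = u[t],
    #   x_k[t] = (1 - mu) * x_k[t-1] + mu * x_{k-1}[t-1],
    # where mu controls the dispersion of the delay line
    # (mu = 1 degenerates to an ordinary tapped delay line).
    T = len(u)
    x = np.zeros((order + 1, T))
    x[0] = u
    for t in range(1, T):
        for k in range(1, order + 1):
            x[k, t] = (1 - mu) * x[k, t - 1] + mu * x[k - 1, t - 1]
    return x        # each row is one dispersive 'tap' fed to the input layer

taps = gamma_memory(np.sin(np.linspace(0, 6 * np.pi, 400)), order=4, mu=0.7)
```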

  20. NRAM: a disruptive carbon-nanotube resistance-change memory

    Science.gov (United States)

    Gilmer, D. C.; Rueckes, T.; Cleveland, L.

    2018-04-01

    Advanced memory technology based on carbon nanotubes (CNTs) (NRAM) possesses desirable properties for implementation in a host of integrated systems, owing to demonstrated advantages of its operation, including high speed (nanotubes can switch state in picoseconds), high endurance (over a trillion), and low power (with essentially zero standby power). The applicable integrated systems for NRAM have markets that will see compound annual growth rates (CAGR) of over 62% between 2018 and 2023, with an embedded-systems CAGR of 115% in 2018-2023 (http://bccresearch.com/pressroom/smc/bcc-research-predicts:-nram-(finally)-to-revolutionize-computer-memory). These opportunities are helping drive the realization of a shift from silicon-based to carbon-based (NRAM) memories. NRAM is a memory cell made up of an interlocking matrix of CNTs, either touching or slightly separated, leading to low or higher resistance states, respectively. The small movement of atoms, as opposed to the movement of electrons in traditional silicon-based memories, gives NRAM more robust endurance and high-temperature retention/operation, which, along with high speed and low power, positions this memory technology as a disruptive replacement for the current status quo of DRAM (dynamic RAM), SRAM (static RAM), and NAND flash memories.

  1. Organizational memory and the completeness of process modeling in ERP systems

    NARCIS (Netherlands)

    van Stijn, E.J.; Wensley, A.K.P.

    2001-01-01

    Enterprise resource planning (ERP) systems not only have a broad functional scope promising to support many different business processes, they also embed many different aspects of the company’s organizational memory. Disparities can exist between those memory contents in the ERP system and related

  2. Computational experiment for the purpose of determining the probabilistic and temporal characteristics of information security systems against unauthorized access in automated information systems

    Directory of Open Access Journals (Sweden)

    A. V. Skrypnikov

    2017-01-01

    Full Text Available The article is devoted to a method for the experimental estimation of the operating parameters of standard, certified systems for protecting information from unauthorized access, which are widely used in organizations operating automated information systems. In the course of the experiment, statistical data were evaluated on the dynamics of the functioning of information security systems against unauthorized access in automated information systems. The execution times of the protective functions were registered using the ProcessMonitor utility from the Sysinternals suite, which is used to filter processes and threads. Loading the processor and main memory of the computer with special software, designed specifically for this experimental research, simulates the operation of the protection system (GIS) under realistic working conditions. The load-simulation software was developed in Visual Studio 2015 as a console application; it loads the processor to 50-70% and the main memory to 60-80%. The measured execution times of the protective functions under high utilization of computing resources make it possible to assess the conflict and dynamic properties of the GIS. In the future, the obtained experimental estimates can be used to develop a model of information security in automated information systems, as well as in the formulation of quality requirements (resource intensity, response time to the user's request, availability, etc.). The results of the computational experiment can also be used to develop a software package for assessing the dynamic performance of information security systems against unauthorized access in automated information systems.
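
    A load generator of the kind described can be approximated with a busy/idle duty cycle per core; this is a hypothetical re-creation for illustration, not the authors' console application:

```python
import time
import multiprocessing as mp

def load_core(duty=0.6, period=0.1, duration=30.0):
    # Burn the CPU for duty*period seconds, then sleep for the rest of
    # the period, approximating a target utilisation (~60% here).
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        t0 = time.perf_counter()
        while time.perf_counter() - t0 < duty * period:
            pass                              # busy phase
        time.sleep((1.0 - duty) * period)     # idle phase

if __name__ == "__main__":
    # One loader process per core drives whole-machine utilisation.
    procs = [mp.Process(target=load_core) for _ in range(mp.cpu_count())]
    for p in procs: p.start()
    for p in procs: p.join()
```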

  3. Digital computer structure and design

    CERN Document Server

    Townsend, R

    2014-01-01

    Digital Computer Structure and Design, Second Edition discusses switching theory, counters, sequential circuits, number representation, and arithmetic functions. The book also describes computer memories, the processor, the data flow system of the processor, the processor control system, and the input-output system. Switching theory, which is purely a mathematical concept, centers on the properties of interconnected networks of "gates." The theory deals with binary functions of 1 and 0 which can change instantaneously from one to the other without intermediate values. The binary number system is

  4. MSAProbs-MPI: parallel multiple sequence aligner for distributed-memory systems.

    Science.gov (United States)

    González-Domínguez, Jorge; Liu, Yongchao; Touriño, Juan; Schmidt, Bertil

    2016-12-15

    MSAProbs is a state-of-the-art protein multiple sequence alignment tool based on hidden Markov models. It can achieve high alignment accuracy at the expense of relatively long runtimes for large-scale input datasets. In this work we present MSAProbs-MPI, a distributed-memory parallel version of the multithreaded MSAProbs tool that is able to reduce runtimes by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on a cluster with 32 nodes (each containing two Intel Haswell processors) shows reductions in execution time of over one order of magnitude for typical input datasets. Furthermore, MSAProbs-MPI using eight nodes is faster than the GPU-accelerated QuickProbs running on a Tesla K20. Another strong point is that MSAProbs-MPI can deal with large datasets for which MSAProbs and QuickProbs might fail due to time and memory constraints, respectively. Source code in C++ and MPI running on Linux systems, as well as a reference manual, are available at http://msaprobs.sourceforge.net. Contact: jgonzalezd@udc.es. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. A computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities (RASA) Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226Ra concentration in soil samples using a 6 x 9 inch NaI(Tl) crystal containing a 3.25-inch-deep by 3.5-inch-diameter well. This gamma detection system is controlled by a minicomputer with dual floppy disk storage, a line printer, and an optional X-Y plotter. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer, and BASIC is used for data processing. The computer system is a Commodore Business Machines (CBM) Model 8032 personal computer with CBM peripherals. Control and data signals pass via the parallel user's port to the interface unit. The analog-to-digital converter (ADC) is controlled in machine language, bootstrapped to high memory, and addressed through the BASIC program. The BASIC program is designed to be "user friendly" and provides the operator with several modes of operation, such as background and analysis acquisition. Any number of energy regions of interest (ROI) may be analyzed, with automatic background subtraction. Also employed in the BASIC program are the 226Ra algorithms, which utilize linear and polynomial regression equations for data conversion and look-up tables for radon equilibration coefficients. The optional X-Y plotter may be used with two- or three-dimensional curve programs to enhance data analysis and presentation. A description of the system is presented and typical applications are discussed.
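
    The ROI analysis reduces to net-count sums plus a calibration fit; a minimal sketch in which the ROI channel bounds and the regression coefficients a and b are hypothetical placeholders for the program's fitted values and look-up tables:

```python
def roi_concentrations(spectrum, background, rois, a=0.012, b=0.3):
    # Net counts per region of interest with automatic background
    # subtraction, converted to concentration by a linear calibration.
    # a and b are hypothetical placeholders for the regression fit.
    results = []
    for lo, hi in rois:
        net = sum(spectrum[lo:hi]) - sum(background[lo:hi])
        results.append(a * net + b)
    return results

# Example: two hypothetical ROIs over a 4096-channel spectrum.
spectrum = [5] * 4096
background = [2] * 4096
print(roi_concentrations(spectrum, background, [(600, 650), (1760, 1810)]))
```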

  6. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

    Science.gov (United States)

    Choi, Shinhyun; Tan, Scott H.; Li, Zefan; Kim, Yunjo; Choi, Chanyeol; Chen, Pai-Yu; Yeon, Hanwool; Yu, Shimeng; Kim, Jeehwan

    2018-01-01

    Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

  7. Computer assisted treatments for image pattern data of laser plasma experiments

    International Nuclear Information System (INIS)

    Yaoita, Akira; Matsushima, Isao

    1987-01-01

    An image data processing system for laser-plasma experiments has been constructed. These image data are two-dimensional images taken by X-ray, UV, infrared and visible light television cameras and also by streak cameras. They are digitized by frame memories. The digitized image data are stored in disk memories with the aid of a microcomputer. The data are processed by a host computer and stored in the files of the host computer and on magnetic tapes. In this paper, an overview of the image data processing system and some software for data handling in the host computer are reported. (author)

  8. Memory handling in the ATLAS submission system from job definition to sites limits

    CERN Document Server

    Forti, Alessandra; The ATLAS collaboration

    2016-01-01

    The ATLAS workload management system is a pilot system based on a late binding philosophy that for many years avoided passing fine-grained job requirements to the batch system. In particular, for memory most of the requirements were set to request 4GB vmem as defined in the EGI portal VO card, i.e. 2GB RAM + 2GB swap. However, in the past few years several changes have happened in the operating system kernel and in the applications that make such a definition of the memory requested for slots obsolete, and ATLAS has introduced the new PRODSYS2 workload management, which has a more flexible system to evaluate the memory requirements and to submit to appropriate queues. The work stemmed in particular from the introduction of 64bit multicore workloads and the increased memory requirements of some of the single core applications. This paper describes the overall review and changes of memory handling starting from the definition of tasks, the way tasks memory requirements are set using scout jobs and the new memor...

  9. Memory handling in the ATLAS submission system from job definition to sites limits

    Science.gov (United States)

    Forti, A. C.; Walker, R.; Maeno, T.; Love, P.; Rauschmayr, N.; Filipcic, A.; Di Girolamo, A.

    2017-10-01

    In the past few years, the increased luminosity of the LHC, changes in the Linux kernel and a move to a 64bit architecture have affected the memory usage of ATLAS jobs, and the ATLAS workload management system had to be adapted to be more flexible and to pass memory parameters to the batch systems, which in the past was not a necessity. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory is mapped to the other components, and what changes had to be applied to make the submission chain work. These changes go from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed for that purpose, to how these values are used by the submission component of the system and how the jobs are treated by the sites through the CEs, batch systems and ultimately the kernel.

  10. Memory handling in the ATLAS submission system from job definition to sites limits

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00027700; The ATLAS collaboration; Walker, Rodney; Maeno, Tadashi; Love, Peter; Rauschmayr, Nathalie; Filipcic, Andrej; Di Girolamo, Alessandro

    2017-01-01

    In the past few years, the increased luminosity of the LHC, changes in the Linux kernel and a move to a 64bit architecture have affected the memory usage of ATLAS jobs, and the ATLAS workload management system had to be adapted to be more flexible and to pass memory parameters to the batch systems, which in the past was not a necessity. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory is mapped to the other components, and what changes had to be applied to make the submission chain work. These changes go from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed for that purpose, to how these values are used by the submission component of the system and how the jobs are treated by the sites through the CEs, batch systems and ultimately the kernel.

  11. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    Science.gov (United States)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  12. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension.

    Science.gov (United States)

    Klooster, Nathaniel B; Cook, Susan W; Uc, Ergun Y; Duff, Melissa C

    2014-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson's disease (PD), and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi (TOH) paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.

  13. Short-term memory and long-term memory are still different.

    Science.gov (United States)

    Norris, Dennis

    2017-09-01

    A commonly expressed view is that short-term memory (STM) is nothing more than activated long-term memory. If true, this would overturn a central tenet of cognitive psychology: the idea that there are functionally and neurobiologically distinct short- and long-term stores. Here I present an updated case for a separation between short- and long-term stores, focusing on the computational demands placed on any STM system. STM must support memory for previously unencountered information, the storage of multiple tokens of the same type, and variable binding. None of these can be achieved simply by activating long-term memory. For example, even a simple sequence of digits such as "1, 3, 1", where there are two tokens of the digit "1", cannot be stored in the correct order simply by activating the representations of the digits "1" and "3" in LTM. I also review recent neuroimaging data that have been presented as evidence that STM is activated LTM and show that these data are exactly what one would expect to see based on a conventional 2-store view. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
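
    The "1, 3, 1" argument lends itself to a toy demonstration: if storage were nothing but a set of activated LTM types, token count and serial order would be lost, whereas a store that binds tokens to positions preserves both. The snippet below only illustrates that logical point; it is not a model from the paper.

        sequence = [1, 3, 1]

        # "activated LTM" view: only which digit types are active
        activated_types = set(sequence)            # {1, 3}; one token of "1" is gone

        # a minimal STM view: tokens bound to serial positions
        stm_bindings = list(enumerate(sequence))   # [(0, 1), (1, 3), (2, 1)]

        assert activated_types == {1, 3}           # "1, 3, 1" and "3, 1" collapse together
        assert [d for _, d in stm_bindings] == [1, 3, 1]   # order is recoverable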

  14. A selective logging mechanism for hardware transactional memory systems

    OpenAIRE

    Lupon Navazo, Marc; Magklis, Grigorios; González Colás, Antonio María

    2011-01-01

    Log-based Hardware Transactional Memory (HTM) systems offer an elegant solution to handle speculative data that overflow transactional L1 caches. By keeping the pre-transactional values on a software-resident log, speculative values can be safely moved across the memory hierarchy, without requiring expensive searches on L1 misses or commits.

  15. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  16. Methods for reducing interference in the Complementary Learning Systems model: oscillating inhibition and autonomous memory rehearsal.

    Science.gov (United States)

    Norman, Kenneth A; Newman, Ehren L; Perotte, Adler J

    2005-11-01

    The stability-plasticity problem (i.e. how the brain incorporates new information into its model of the world, while at the same time preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: Finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories, and selective punishment of competing memories; and preventing catastrophic forgetting in the case of non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems: First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm.

  17. A revised limbic system model for memory, emotion and behaviour.

    Science.gov (United States)

    Catani, Marco; Dell'acqua, Flavio; Thiebaut de Schotten, Michel

    2013-09-01

    Emotion, memories and behaviour emerge from the coordinated activities of regions connected by the limbic system. Here, we propose an update of the limbic model based on the seminal work of Papez, Yakovlev and MacLean. In the revised model we identify three distinct but partially overlapping networks: (i) the hippocampal-diencephalic and parahippocampal-retrosplenial network dedicated to memory and spatial orientation; (ii) the temporo-amygdala-orbitofrontal network for the integration of visceral sensation and emotion with semantic memory and behaviour; (iii) the default-mode network involved in autobiographical memories and introspective self-directed thinking. The three networks share cortical nodes that are emerging as principal hubs in connectomic analysis. This revised network model of the limbic system reconciles recent functional imaging findings with anatomical accounts of clinical disorders commonly associated with limbic pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. PIMS: Memristor-Based Processing-in-Memory-and-Storage.

    Energy Technology Data Exchange (ETDEWEB)

    Cook, Jeanine

    2018-02-01

    Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that accessed larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in significant energy efficiency improvement, but only about an O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.

  19. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Science.gov (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  20. A single-trace dual-process model of episodic memory: a novel computational account of familiarity and recollection.

    Science.gov (United States)

    Greve, Andrea; Donaldson, David I; van Rossum, Mark C W

    2010-02-01

    Dual-process theories of episodic memory state that retrieval is contingent on two independent processes: familiarity (providing a sense of oldness) and recollection (recovering events and their context). A variety of studies have reported distinct neural signatures for familiarity and recollection, supporting dual-process theory. One outstanding question is whether these signatures reflect the activation of distinct memory traces or the operation of different retrieval mechanisms on a single memory trace. We present a computational model that uses a single neuronal network to store memory traces, but two distinct and independent retrieval processes access the memory. The model is capable of performing familiarity and recollection-based discrimination between old and new patterns, demonstrating that dual-process models need not rely on multiple independent memory traces, but can use a single trace. Importantly, our putative familiarity and recollection processes exhibit distinct characteristics analogous to those found in empirical data; they diverge in capacity and sensitivity to sparse and correlated patterns, exhibit distinct ROC curves, and account for performance on both item and associative recognition tests. The demonstration that a single-trace, dual-process model can account for a range of empirical findings highlights the importance of distinguishing between neuronal processes and the neuronal representations on which they operate.

  1. The role of the dorsal striatum in extinction: A memory systems perspective.

    Science.gov (United States)

    Goodman, Jarid; Packard, Mark G

    2018-04-01

    The present review describes a role for the dorsal striatum in extinction. Evidence from brain lesion and pharmacological studies indicate that the dorsolateral region of the striatum (DLS) mediates extinction in various maze learning and instrumental learning tasks. Within the context of a multiple memory systems view, the role of the DLS in extinction appears to be selective. Specifically, the DLS mediates extinction of habit memory and is not required for extinction of cognitive memory. Thus, extinction mechanisms mediated by the DLS may involve response-produced inhibition (e.g. inhibition of existing stimulus-response associations or formation of new inhibitory stimulus-response associations), as opposed to cognitive mechanisms (e.g. changes in expectation). Evidence also suggests that NMDA-dependent forms of synaptic plasticity may be part of the mechanism through which the DLS mediates extinction of habit memory. In addition, in some learning situations, DLS inactivation enhances extinction, suggesting a competitive interaction between multiple memory systems during extinction training. Consistent with a multiple memory systems perspective, it is suggested that the DLS represents one of several distinct neural systems that specialize in extinction of different kinds of memory. The relevance of these findings to the development of behavioral and pharmacological therapies that target the maladaptive habit-like symptoms in human psychopathology is also briefly considered. Published by Elsevier Inc.

  2. Assessment of serotonergic system in formation of memory and learning

    Directory of Open Access Journals (Sweden)

    J. C. da Silva

    2017-11-01

    Full Text Available We evaluated the involvement of the serotonergic system in memory formation and learning processes in healthy adult Wistar rats. Fifty-seven rats in 5 groups had one serotonergic nucleus damaged by an electric current. The electrolytic lesion was carried out using a continuous current of 2 mA for two seconds by stereotactic surgery. Animals were submitted to learning and memory tests. Rats presented different responses in the memory tests depending on the serotonergic nucleus involved. Both explicit and implicit memory may be affected after the lesion, although some groups showed significant differences and others did not. Damage to a serotonergic nucleus was able to cause impairment in the memory of Wistar rats. The formation of implicit and explicit memory is impaired after injury to some serotonergic nuclei.

  3. Computational aspects of feedback in neural circuits.

    Directory of Open Access Journals (Sweden)

    Wolfgang Maass

    2007-01-01

    Full Text Available It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also

  4. The development of an oscilloscope visualization system for the hybrid computer E.A.I. 8900

    International Nuclear Information System (INIS)

    Djukanovic, Radojka

    1970-01-01

    This report was the first subject of a thesis submitted to the Faculte des Sciences in Paris on 30 June 1970 by Mrs. Radojka Djukanovic-Remsak in order to obtain the grade of doctor-engineer. A visualization system was studied and developed whereby various figures could be displayed, by means of points and segments, on an oscilloscope screen without storage memory. This system was realized using the analog and logic elements of an E.A.I. 8800 analog computer and a series of programs intended to be used in conjunction with the E.A.I. 8400 digital computer. The second subject, 'The evolution of multiprogramming', was dealt with in note CEA-N-1346. (author) [fr

  5. Computation in the Learning System of Cephalopods.

    Science.gov (United States)

    Young, J Z

    1991-04-01

    The memory mechanisms of cephalopods consist of a series of matrices of intersecting axes, which find associations between the signals of input events and their consequences. The tactile memory is distributed among eight such matrices, and there is also some suboesophageal learning capacity. The visual memory lies in the optic lobe and four matrices, with some re-exciting pathways. In both systems, damage to any part reduces proportionally the effectiveness of the whole memory. These matrices are somewhat like those in mammals, for instance those in the hippocampus. The first matrix in both visual and tactile systems receives signals of vision and taste, and its output serves to increase the tendency to attack or to take with the arms. The second matrix provides for the correlation of groups of signals on its neurons, which pass signals to the third matrix. Here large cells find clusters in the sets of signals. Their output re-excites those of the first lobe, unless pain occurs. In that case, this set of cells provides a record that ensures retreat. There is experimental evidence that these distributed memory systems allow for the identification of categories of visual and tactile inputs, for generalization, and for decision on appropriate behavior in the light of experience. The evidence suggests that learning in cephalopods is not localized to certain layers or "grandmother cells" but is distributed with high redundance in serial networks, with recurrent circuits.

  6. A Heterogeneous Multiprocessor Graphics System Using Processor-Enhanced Memories

    Science.gov (United States)

    1989-02-01

    frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form factors. The hardware consists of ... generality for rendering curved surfaces, volume data, objects described with Constructive Solid Geometry, for rendering scenes using the radiosity ... and for computing a spherical radiosity lighting model (see Section 7.6). (Figure residue removed; it showed custom memory chips of 208 bits x 128 pixels on the renderer board.)

  7. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  8. Operating experience of the TPA-1001 mini-computer in experimental control systems of main synchrophasotron parameters

    International Nuclear Information System (INIS)

    Kazanskij, G.S.; Khoshenko, A.A.

    1978-01-01

    The experience of applying a TPA-1001 minicomputer to control the basic parameters of a synchrophasotron is discussed. The available data have shown that the efficiency of a computer management and measurement system (CMMS) for an accelerator can be determined as a trade-off between the accelerator and the system reliability, and between the system mobility and its software. At present, the system employs two VT-340 display units, an arithmetic unit and an accelerating frequency measurement loop. In addition, the system memory is expanded up to 12 K. A new interactive program has been developed which enables the user to interact with the system via three units (a teletype and two display units). An accelerating frequency measuring and control flowchart has been implemented and covers the whole duty cycle, while its measuring accuracy is better than 4x10^-4.

  9. Pacing a data transfer operation between compute nodes on a parallel computer

    Science.gov (United States)

    Blocksome, Michael A [Rochester, MN

    2011-09-13

    Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
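
    The claimed sequence (transfer a chunk, issue a pacing request, transfer the next chunk only once the pacing response arrives) can be sketched as follows. This is a schematic, self-contained illustration: the target DMA engine is simulated by a trivial object, and all names are illustrative rather than taken from the patent.

        # Sketch of paced chunk transfer; the "target DMA engine" is faked so
        # the example runs standalone.

        class FakeTargetDMA:
            """Stands in for the target DMA engine; absorbs chunks and
            acknowledges pacing requests."""
            def __init__(self):
                self.received = []

            def deliver(self, chunk):
                self.received.append(chunk)

            def pacing_response(self):
                # a real engine would respond only when ready for more data
                return True

        def paced_transfer(message: bytes, chunk_size: int, target: FakeTargetDMA):
            offset = 0
            while offset < len(message):
                target.deliver(message[offset:offset + chunk_size])  # one chunk
                # Send a pacing request; the next chunk waits for the response.
                while not target.pacing_response():
                    pass
                offset += chunk_size

        target = FakeTargetDMA()
        paced_transfer(b"example application message", 8, target)
        assert b"".join(target.received) == b"example application message"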

  10. Simulation of radiation effects on three-dimensional computer optical memories

    International Nuclear Information System (INIS)

    Moscovitch, M.; Emfietzoglou, D.

    1997-01-01

    A model was developed to simulate the effects of heavy charged-particle (HCP) radiation on the information stored in three-dimensional computer optical memories. The model is based on (i) the HCP track radial dose distribution, (ii) the spatial and temporal distribution of temperature in the track, (iii) the matrix-specific radiation-induced changes that will affect the response, and (iv) the kinetics of transition of photochromic molecules from the colored to the colorless isomeric form (bit flip). It is shown that information stored in a volume of several nanometers radius around the particle's track axis may be lost. The magnitude of the effect is dependent on the particle's track structure. copyright 1997 American Institute of Physics

  11. MDGRAPE-4: a special-purpose computer system for molecular dynamics simulations.

    Science.gov (United States)

    Ohmura, Itta; Morimoto, Gentaro; Ohno, Yousuke; Hasegawa, Aki; Taiji, Makoto

    2014-08-06

    We are developing the MDGRAPE-4, a special-purpose computer system for molecular dynamics (MD) simulations. MDGRAPE-4 is designed to achieve strong scalability for protein MD simulations through the integration of general-purpose cores, dedicated pipelines, memory banks and network interfaces (NIFs) to create a system on chip (SoC). Each SoC has 64 dedicated pipelines that are used for non-bonded force calculations and run at 0.8 GHz. Additionally, it has 65 Tensilica Xtensa LX cores with single-precision floating-point units that are used for other calculations and run at 0.6 GHz. At peak performance levels, each SoC can evaluate 51.2 G interactions per second. It also has 1.8 MB of embedded shared memory banks and six network units with a peak bandwidth of 7.2 GB/s for the three-dimensional torus network. The system consists of 512 (8×8×8) SoCs in total, which are mounted on 64 node modules with eight SoCs each. Optical transmitters/receivers are used for internode communication. The expected maximum power consumption is 50 kW. While the MDGRAPE-4 software is still being improved, we plan to run MD simulations on MDGRAPE-4 in 2014. The MDGRAPE-4 system will enable long-time molecular dynamics simulations of small systems. It is also useful for multiscale molecular simulations where the particle simulation parts often become bottlenecks.

  12. The cholinergic system, circadian rhythmicity, and time memory

    NARCIS (Netherlands)

    Hut, R. A.; Van der Zee, E. A.

    2011-01-01

    This review provides an overview of the interaction between the mammalian cholinergic system and circadian system, and its possible role in time memory. Several studies made clear that circadian (daily) fluctuations in acetylcholine (ACh) release, cholinergic enzyme activity and cholinergic receptor

  13. New algorithm to reduce the number of computing steps in reliability formula of Weighted-k-out-of-n system

    Directory of Open Access Journals (Sweden)

    Tatsunari Ohkura

    2007-02-01

    Full Text Available In the disjoint products version of reliability analysis of weighted-k-out-of-n systems, it is necessary to determine the order in which the weight of components is to be considered. The k-out-of-n:G(F) system consists of n components; each component has its own probability and positive integer weight such that the system is operational (failed) if and only if the total weight of some operational (failed) components is at least k. This paper designs a method to compute the reliability in O(nk) computing time and O(nk) memory space. The proposed method expresses the system reliability in fewer product terms than those already published.
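
    The O(nk) complexity class can be illustrated with a standard dynamic-programming recurrence over components and residual weight demand; this is a sketch of the complexity bound, not of the paper's disjoint-products ordering. With a rolling array the space drops to O(k).

        def weighted_k_out_of_n_reliability(p, w, k):
            """p[i]: component reliabilities; w[i]: positive integer weights;
            the system works iff the total weight of working components >= k."""
            # f[j] = P(total working weight >= j) over the components seen so far
            f = [1.0] + [0.0] * k
            for pi, wi in zip(p, w):
                g = [0.0] * (k + 1)
                for j in range(k + 1):
                    # component works (prob. pi): remaining demand drops by wi
                    g[j] = pi * f[max(0, j - wi)] + (1.0 - pi) * f[j]
                f = g
            return f[k]

        # Example: weights 2/1/3, the system needs total working weight >= 3.
        print(weighted_k_out_of_n_reliability([0.9, 0.8, 0.7], [2, 1, 3], 3))

    For this small example the value 0.916 can be confirmed by enumerating all eight component states, since the system works whenever component 3 works or components 1 and 2 both work.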

  14. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project aim to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  15. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning across computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute an x86-64 machine code, and recommends th...

  16. An innovative ultra-capacitor driven shape memory alloy actuator with an embedded control system

    International Nuclear Information System (INIS)

    Li, Peng; Song, Gangbing

    2014-01-01

    In this paper, an innovative ultra-capacitor driven shape memory alloy (SMA) actuator with an embedded control system is proposed targeting high power high-duty cycle SMA applications. The ultra-capacitor, which is capable of delivering massive amounts of instantaneous current in a compact dimension for high power applications, is chosen as the main component of the power supply. A specialized embedded system is designed from the ground up to control the ultra-capacitor driven SMA system. The control of the ultra-capacitor driven SMA is different from that of a regular constant voltage powered SMA system in that the energy and the voltage of the ultra-capacitor decrease as the system load increases. The embedded control system is also different from a computer-based control system in that it has limited computational power, and the control algorithm has to be designed to be simple while effective so that it can fit into the embedded system environment. The problem of a variable voltage power source induced by the use of the ultra-capacitor is solved by using a fuzzy PID (proportional integral and derivative) control. The method of using an ultra-capacitor to drive SMA actuators enabled SMA as a good candidate for high power high-duty cycle applications. The proposed embedded control system provides a good and ready-to-use solution for SMA high power applications. (paper)
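
    A simplified sketch of the control problem follows: because the ultra-capacitor voltage sags as energy is drawn, the duty cycle needed for a given heating power must grow over time, so the supply voltage has to enter the control law. The plain PID with a voltage-compensation term below is only a stand-in for the paper's fuzzy PID, and all gains and constants are hypothetical.

        # Illustrative PID with supply-voltage compensation for an SMA actuator
        # fed from a discharging ultra-capacitor; not the paper's controller.

        def make_pid(kp, ki, kd, dt):
            state = {"i": 0.0, "e_prev": 0.0}
            def step(error):
                state["i"] += error * dt
                d = (error - state["e_prev"]) / dt
                state["e_prev"] = error
                return kp * error + ki * state["i"] + kd * d
            return step

        def duty_cycle(pid_out, v_cap, v_nominal=2.7):
            # The same heating power needs a larger duty cycle as v_cap drops
            # (power is roughly duty * v^2 / R), so scale by (v_nominal/v_cap)^2.
            scale = (v_nominal / max(v_cap, 0.1)) ** 2
            return min(1.0, max(0.0, pid_out * scale))

        pid = make_pid(kp=8.0, ki=0.5, kd=0.05, dt=0.01)
        print(duty_cycle(pid(0.2), v_cap=2.1))   # position error of 0.2, sagging supply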

  17. Memory and reward systems coproduce 'nostalgic' experiences in the brain.

    Science.gov (United States)

    Oba, Kentaro; Noriuchi, Madoka; Atomi, Tomoaki; Moriguchi, Yoshiya; Kikuchi, Yoshiaki

    2016-07-01

    People sometimes experience an emotional state known as 'nostalgia', which involves experiencing predominantly positive emotions while remembering autobiographical events. Nostalgia is thought to play an important role in psychological resilience. Previous neuroimaging studies have shown involvement of memory and reward systems in such experiences. However, it remains unclear how these two systems are collaboratively involved with nostalgia experiences. Here, we conducted a functional magnetic resonance imaging study of healthy females to investigate the relationship between memory-reward co-activation and nostalgia, using childhood-related visual stimuli. Moreover, we examined the factors constituting nostalgia and their neural correlates. We confirmed the presence of nostalgia-related activity in both memory and reward systems, including the hippocampus (HPC), substantia nigra/ventral tegmental area (SN/VTA), and ventral striatum (VS). We also found significant HPC-VS co-activation, with its strength correlating with individual 'nostalgia tendencies'. Factor analyses showed that two dimensions underlie nostalgia: emotional and personal significance and chronological remoteness, with the former correlating with caudal SN/VTA and left anterior HPC activity, and the latter correlating with rostral SN/VTA activity. These findings demonstrate the cooperative activity of memory and reward systems, where each system has a specific role in the construction of the factors that underlie the experience of nostalgia. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  18. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  19. MEMORY MODULATION

    Science.gov (United States)

    Roozendaal, Benno; McGaugh, James L.

    2011-01-01

    Our memories are not all created equally strong: Some experiences are well remembered while others are remembered poorly, if at all. Research on memory modulation investigates the neurobiological processes and systems that contribute to such differences in the strength of our memories. Extensive evidence from both animal and human research indicates that emotionally significant experiences activate hormonal and brain systems that regulate the consolidation of newly acquired memories. These effects are integrated through noradrenergic activation of the basolateral amygdala which regulates memory consolidation via interactions with many other brain regions involved in consolidating memories of recent experiences. Modulatory systems not only influence neurobiological processes underlying the consolidation of new information, but also affect other mnemonic processes, including memory extinction, memory recall and working memory. In contrast to their enhancing effects on consolidation, adrenal stress hormones impair memory retrieval and working memory. Such effects, as with memory consolidation, require noradrenergic activation of the basolateral amygdala and interactions with other brain regions. PMID:22122145

  20. Internode data communications in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
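
    Schematically, the boot-time buffering works as follows: the messaging unit pre-allocates one buffer per expected process, parks messages that arrive before the process exists, and the process drains its buffer into main memory when it initializes. Class and method names below are illustrative, not from the patent.

        # Toy model of boot-time message buffering in a messaging unit.

        class MessagingUnit:
            def __init__(self, num_processes):
                # one pre-allocated buffer per process, usable before init
                self.buffers = {pid: [] for pid in range(num_processes)}

            def receive(self, pid, message):
                self.buffers[pid].append(message)   # message arrives pre-init

        class Process:
            def __init__(self, pid, unit):
                # on initialization, copy pending messages into main memory
                self.main_memory_buffer = list(unit.buffers[pid])
                unit.buffers[pid].clear()

        unit = MessagingUnit(num_processes=2)
        unit.receive(0, "early message for process 0")
        p0 = Process(0, unit)                       # init drains the unit's buffer
        assert p0.main_memory_buffer == ["early message for process 0"]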

  1. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model.

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Lin, Chia-Hung; Li, Chien-Ming

    2018-02-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidities and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via resistance changes with illumination in the visible spectrum of 500-700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level on both end-sensing units and remote monitoring devices via a wireless network and fog/cloud computing. The animal experimental results (pig blood) demonstrate the feasibility.

  2. Tiling and Asynchronous Communication Optimizations for Stencil Computations

    KAUST Repository

    Malas, Tareq

    2015-12-07

    The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. Most of the established work concentrates on updating separate cache blocks per thread, which works on all types of shared memory systems, regardless of whether there is a shared cache among the cores. This approach is memory-bandwidth limited in several situations, where the cache space for each thread can be too small to provide sufficient in-cache data reuse. We introduce a generalized multi-dimensional intra-tile parallelization scheme for shared-cache multicore processors that results in a significant reduction of cache size requirements and shows a large saving in memory bandwidth usage compared to existing approaches. It also provides data access patterns that allow efficient hardware prefetching. Our parameterized thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the Central Processing Unit (CPU). We also introduce an efficient diamond tiling structure for both shared-memory cache blocking and distributed-memory relaxed-synchronization communication, demonstrated using one-dimensional domain decomposition. We describe the approach and our open-source testbed implementation details (called Girih), present performance results on contemporary Intel processors, and apply advanced performance modeling techniques to reconcile the observed performance with hardware capabilities. Furthermore, we conduct a comparison with the state-of-the-art stencil frameworks PLUTO and Pochoir in shared memory, using corner-case stencil operators. We study the
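
    The core idea of temporal blocking, of which the diamond tiling above is a refined form, can be shown with a 1D 3-point Jacobi stencil: a block plus a halo of width equal to the number of time steps is advanced several steps while it is cache-resident, yielding the same results as the naive sweep while touching main memory far less often. The sketch below is illustrative and is not taken from Girih.

        def sweep(u):
            """One Jacobi update; the endpoints act as fixed boundaries."""
            return [u[0]] + [(u[i-1] + u[i] + u[i+1]) / 3.0
                             for i in range(1, len(u) - 1)] + [u[-1]]

        def naive(u, steps):
            for _ in range(steps):
                u = sweep(u)                 # full-array traffic every time step
            return u

        def blocked(u, steps, block):
            n, out = len(u), list(u)
            for start in range(1, n - 1, block):
                lo, hi = max(0, start - steps), min(n, start + block + steps)
                tile = u[lo:hi]              # block plus halo of width `steps`
                for _ in range(steps):
                    tile = sweep(tile)       # stays cache-resident
                for i in range(start, min(start + block, n - 1)):
                    out[i] = tile[i - lo]    # keep only the still-valid interior
            return out

        u0 = [float(i % 5) for i in range(32)]
        assert naive(u0, 4) == blocked(u0, 4, 8)

    The halo width equals the step count because a frozen tile edge contaminates one interior cell per time step; points at least `steps` cells from an artificial edge are therefore bit-identical to the naive result.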

  3. Computer Operating System Maintenance.

    Science.gov (United States)

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on ... computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  4. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    Directory of Open Access Journals (Sweden)

    Nathaniel Bloem Klooster

    2015-01-01

    Full Text Available Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture’s ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson’s disease, and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.

  5. Logic and memory concepts for all-magnetic computing based on transverse domain walls

    International Nuclear Information System (INIS)

    Vandermeulen, J; Van de Wiele, B; Dupré, L; Van Waeyenberge, B

    2015-01-01

    We introduce a non-volatile digital logic and memory concept in which the binary data is stored in the transverse magnetic domain walls present in in-plane magnetized nanowires with sufficiently small cross-sectional dimensions. We assign the digital bit to the two possible orientations of the transverse domain wall. Numerical proofs-of-concept are presented for a NOT-, AND- and OR-gate, a FAN-out, as well as a reading and writing device. Contrary to the chirality-based vortex domain wall logic gates introduced in Omari and Hayward (2014 Phys. Rev. Appl. 2 044001), the presented concepts remain applicable when miniaturized and are driven by electrical currents, making the technology compatible with the in-plane racetrack memory concept. The individual devices can be easily combined into logic networks working with clock speeds that scale linearly with decreasing design dimensions. This opens opportunities for an all-magnetic computing technology where the digital data is stored and processed under the same magnetic representation. (paper)

  6. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and performance characteristics of such a hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics, using the identified objectives of computing, which can be used on any platform and any type of computing system, and for application automation, without modifications to the structure, hardware, or software coding by an exte...

  7. Reward-related learning via multiple memory systems.

    Science.gov (United States)

    Delgado, Mauricio R; Dickerson, Kathryn C

    2012-07-15

    The application of a neuroeconomic approach to the study of reward-related processes has provided significant insights in our understanding of human learning and decision making. Much of this research has focused primarily on the contributions of the corticostriatal circuitry, involved in trial-and-error reward learning. As a result, less consideration has been allotted to the potential influence of different neural mechanisms such as the hippocampus or to more common ways in human society in which information is acquired and utilized to reach a decision, such as through explicit instruction rather than trial-and-error learning. This review examines the individual contributions of multiple learning and memory neural systems and their interactions during human decision making in both normal and neuropsychiatric populations. Specifically, the anatomical and functional connectivity across multiple memory systems are highlighted to suggest that probing the role of the hippocampus and its interactions with the corticostriatal circuitry via the application of model-based neuroeconomic approaches may provide novel insights into neuropsychiatric populations that suffer from damage to one of these structures and as a consequence have deficits in learning, memory, or decision making. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  8. Stochastic memory: getting memory out of noise

    Science.gov (United States)

    Stotland, Alexander; di Ventra, Massimiliano

    2011-03-01

    Memory circuit elements, namely memristors, memcapacitors and meminductors, can store information without the need of a power source. These systems are generally defined in terms of deterministic equations of motion for the state variables that are responsible for memory. However, in real systems noise sources can never be eliminated completely. One would then expect noise to be detrimental for memory. Here, we show that under specific conditions on the noise intensity memory can actually be enhanced. We illustrate this phenomenon using a physical model of a memristor in which the addition of white noise into the state variable equation improves the memory and helps the operation of the system. We discuss under which conditions this effect can be realized experimentally, discuss its implications on existing memory systems discussed in the literature, and also analyze the effects of colored noise. Work supported in part by NSF.
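
    The effect is easiest to see in a small simulation. Below is a hedged Euler-Maruyama sketch of a voltage-controlled memristor whose internal state x obeys dx = f(x, V) dt + sigma dW; the drift term and all parameters are illustrative choices, not the model of the abstract.

        # Euler-Maruyama integration of a noisy memristor state equation.
        import math
        import random

        def simulate(sigma, steps=20000, dt=1e-4):
            x = 0.5                                        # internal state in [0, 1]
            r_on, r_off = 100.0, 16000.0                   # limiting resistances
            for n in range(steps):
                t = n * dt
                v = math.sin(2 * math.pi * 10 * t)         # 10 Hz sinusoidal drive
                drift = v * x * (1.0 - x)                  # window-function dynamics
                x += drift * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
                x = min(1.0, max(0.0, x))                  # keep the state in bounds
            return r_on * x + r_off * (1.0 - x)            # resulting memristance

        print(simulate(sigma=0.0), simulate(sigma=0.5))    # noiseless vs noisy state

    Sweeping sigma in such a toy model is one way to probe, numerically, the claim that an intermediate noise intensity can help rather than hurt the stored state.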

  9. Bank switched memory interface for an image processor

    International Nuclear Information System (INIS)

    Barron, M.; Downward, J.

    1980-09-01

    A commercially available image processor is interfaced to a PDP-11/45 through an 8K window of memory addresses. When the image processor was not in use, it was desired to be able to use the 8K address space as real memory. The standard method of accomplishing this would have been to use UNIBUS switches to switch in either the physical 8K bank of memory or the image processor memory. This method has the disadvantage of being rather expensive. As a simple alternative, a device was built to selectively enable or disable either an 8K bank of memory or the image processor memory. To enable the image processor under program control, GEN is contracted in size, the memory is disabled, a device partition for the image processor is created above GEN, and the image processor memory is enabled. The process is reversed to restore memory to GEN. The hardware to enable/disable the image and computer memories is controlled using spare bits from a DR-11K output register. The image processor and physical memory can be switched in or out on line with no adverse effects on the system's operation.

  10. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2015-06-01

    ... examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully ... memory images and malware. This new series of reports will be directed at those who must analyse Linux malware-infected memory images. The skills ...

  11. Iterative schemes for parallel Sn algorithms in a shared-memory computing environment

    International Nuclear Information System (INIS)

    Haghighat, A.; Hunter, M.A.; Mattis, R.E.

    1995-01-01

    Several two-dimensional spatial domain partitioning Sn transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may result in large parallel efficiencies (>60%) on the C90. It is also demonstrated that for a realistic shielding problem, the use of the negative flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency.
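
    The red-black decoupling these algorithms exploit is easiest to see in a simpler setting. The following minimal Python sketch (a red-black Gauss-Seidel sweep for a Laplace problem, not the TWOTRAN-II transport sweep itself) shows why each half-sweep is embarrassingly parallel: cells of one color depend only on cells of the other color.

        import numpy as np

        # Red-black ordering: all same-color updates are independent,
        # so each half-sweep can execute concurrently.
        def red_black_sweep(u):
            for color in (0, 1):                  # 0 = red, 1 = black
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 == color:
                            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                              u[i, j-1] + u[i, j+1])
            return u

        u = np.zeros((32, 32))
        u[0, :] = 1.0                             # fixed boundary condition
        for _ in range(200):
            u = red_black_sweep(u)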

  12. Applied computation and security systems

    CERN Document Server

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu

    2015-01-01

    This book contains the extended versions of the works presented and discussed at the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014), held during April 18-20, 2014 in Kolkata, India. The symposium was jointly organized by the AGH University of Science & Technology, Cracow, Poland and the University of Calcutta, India. Volume I of this double-volume book contains fourteen high-quality book chapters in three parts. Part 1, on Pattern Recognition, presents four chapters. Part 2, on Imaging and Healthcare Applications, contains four more chapters. Part 3, on Wireless Sensor Networking, includes six chapters. Volume II of the book has three parts presenting a total of eleven chapters. Part 4 consists of five excellent chapters on Software Engineering, ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  13. The impact of taxing working memory on negative and positive memories

    NARCIS (Netherlands)

    Engelhard, I.M.; van Uijen, S.L.; Van den Hout, M.A.

    2010-01-01

    BACKGROUND: Earlier studies have shown that horizontal eye movement (EM) during retrieval of a negative memory reduces its vividness and emotionality. This may be due to both tasks competing for working memory (WM) resources. This study examined whether playing the computer game "Tetris" also blurs

  14. The Interaction between Semantic Representation and Episodic Memory.

    Science.gov (United States)

    Fang, Jing; Rüther, Naima; Bellebaum, Christian; Wiskott, Laurenz; Cheng, Sen

    2018-02-01

    The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, where the stored input sequences are of the same type as the data the representation was trained on, and inappropriate representations, where the stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remember some aspects of episodes significantly more accurately if they had previously been familiarized with the objects occurring in the episode, as compared to episodes involving unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role for episodic memory.
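
    The core claim can be reproduced in toy form (the data and the choice of PCA as the "semantic" compressor are assumptions for illustration, not the authors' implementation): a compressor trained on one input distribution stores episodes as sequences of compressed codes, and reconstruction degrades when the stored inputs do not match the training distribution.

        import numpy as np

        rng = np.random.default_rng(0)

        def train_pca(data, k=10):
            mean = data.mean(axis=0)
            _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
            return mean, vt[:k]                  # top-k principal directions

        def encode(x, mean, comps): return (x - mean) @ comps.T
        def decode(z, mean, comps): return z @ comps + mean

        # "semantic" training data lives on a low-dimensional subspace of R^100
        basis = rng.normal(size=(10, 100))
        mean, comps = train_pca(rng.normal(size=(500, 10)) @ basis, k=10)

        appropriate = rng.normal(size=(20, 10)) @ basis    # same structure
        inappropriate = rng.normal(size=(20, 100))         # unrelated structure
        for name, episode in [("appropriate", appropriate),
                              ("inappropriate", inappropriate)]:
            stored = encode(episode, mean, comps)          # episodic store keeps codes
            err = np.abs(decode(stored, mean, comps) - episode).mean()
            print(name, "reconstruction error:", round(float(err), 3))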

  15. Cortical Thickness and Episodic Memory Impairment in Systemic Lupus Erythematosus.

    Science.gov (United States)

    Bizzo, Bernardo Canedo; Sanchez, Tiago Arruda; Tukamoto, Gustavo; Zimmermann, Nicolle; Netto, Tania Maria; Gasparetto, Emerson Leandro

    2017-01-01

    The purpose of this study was to investigate differences in brain cortical thickness between systemic lupus erythematosus (SLE) patients with and without episodic memory impairment and healthy controls. We studied 51 patients divided into 2 groups (SLE with episodic memory deficit, n = 17; SLE without episodic memory deficit, n = 34) by the Rey Auditory Verbal Learning Test and 34 healthy controls. Groups were paired based on sex, age, education, Mini-Mental State Examination score, and accumulation of disease burden. Cortical thickness from magnetic resonance imaging scans was determined using the FreeSurfer software package. SLE patients with episodic memory deficits presented reduced cortical thickness in the left supramarginal cortex and superior temporal gyrus when compared to the control group, and in the right superior frontal, caudal and rostral middle frontal, and precentral gyri when compared to the SLE group without episodic memory impairment, with time since diagnosis of SLE as a covariate. There were no significant differences in cortical thickness between the SLE without episodic memory and control groups. Thinning of different memory-related cortical regions was found in the episodic memory deficit group when it was individually compared to the group of patients without memory impairment and to healthy controls. Copyright © 2016 by the American Society of Neuroimaging.

  16. Memory Dysfunction

    Science.gov (United States)

    Matthews, Brandy R.

    2015-01-01

    Purpose of Review: This article highlights the dissociable human memory systems of episodic, semantic, and procedural memory in the context of neurologic illnesses known to adversely affect specific neuroanatomic structures relevant to each memory system. Recent Findings: Advances in functional neuroimaging and refinement of neuropsychological and bedside assessment tools continue to support a model of multiple memory systems that are distinct yet complementary and to support the potential for one system to be engaged as a compensatory strategy when a counterpart system fails. Summary: Episodic memory, the ability to recall personal episodes, is the subtype of memory most often perceived as dysfunctional by patients and informants. Medial temporal lobe structures, especially the hippocampal formation and associated cortical and subcortical structures, are most often associated with episodic memory loss. Episodic memory dysfunction may present acutely, as in concussion; transiently, as in transient global amnesia (TGA); subacutely, as in thiamine deficiency; or chronically, as in Alzheimer disease. Semantic memory refers to acquired knowledge about the world. Anterior and inferior temporal lobe structures are most often associated with semantic memory loss. The semantic variant of primary progressive aphasia (svPPA) is the paradigmatic disorder resulting in predominant semantic memory dysfunction. Working memory, associated with frontal lobe function, is the active maintenance of information in the mind that can be potentially manipulated to complete goal-directed tasks. Procedural memory, the ability to learn skills that become automatic, involves the basal ganglia, cerebellum, and supplementary motor cortex. Parkinson disease and related disorders result in procedural memory deficits. Most memory concerns warrant bedside cognitive or neuropsychological evaluation and neuroimaging to assess for specific neuropathologies and guide treatment. PMID:26039844

  17. A Semi-Preemptive Computational Service System with Limited Resources and Dynamic Resource Ranking

    Directory of Open Access Journals (Sweden)

    Fang-Yie Leu

    2012-03-01

    In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current length of waiting queue, CPU utilization and bandwidth. With the scores, resource nodes are classified into three levels. User requests, based on their time constraints, are also classified into three types. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for a request of a higher type, it preempts a resource that is executing a lower-type request so that the higher-type request can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), whose users can then exploit the convenient mobile network services and wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
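
    The scoring-and-binning step might look like the following sketch (the weights, thresholds and function names are invented for illustration; the paper defines its own scoring rule):

        # Each node gets a weighted score from its CPU level, free memory,
        # queue length, CPU utilization and bandwidth, then is binned into
        # one of three resource levels.
        def node_score(cpu_level, free_mem_gb, queue_len, cpu_util, bw_mbps):
            return (2.0 * cpu_level + 1.0 * free_mem_gb + 0.5 * bw_mbps / 100
                    - 1.5 * queue_len - 2.0 * cpu_util)

        def node_level(score, high=8.0, low=4.0):
            if score >= high:
                return 1    # level-1 nodes serve the most tightly constrained requests
            if score >= low:
                return 2
            return 3

        s = node_score(cpu_level=4, free_mem_gb=8, queue_len=2,
                       cpu_util=0.3, bw_mbps=500)
        print(node_level(s))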

  18. Memory Interventions in the Criminal Justice System: Some Practical Ethical Considerations.

    Science.gov (United States)

    Cabrera, Laura Y; Elger, Bernice S

    2016-03-01

    In recent years, discussion around memory modification interventions has gained attention. However, discussion around the use of memory interventions in the criminal justice system has been mostly absent. In this paper we start by highlighting the importance memory has for human well-being and personal identity, as well as its role within the criminal forensic setting; in particular, for claiming and accepting legal responsibility, for moral learning, and for retribution. We provide examples of memory interventions that are currently available for medical purposes, but that in the future could be used in the forensic setting to modify criminal offenders' memories. In this section we contrast the cases of (1) dampening and (2) enhancing memories of criminal offenders. We then present, from a pragmatic approach, some pressing ethical issues associated with these types of memory interventions. The paper ends by highlighting how these pragmatic considerations can help establish ethically justified criteria regarding the possibility of interventions aimed at modifying criminal offenders' memories.

  19. System Consolidation of Spatial Memories in Mice: Effects of Enriched Environment

    Directory of Open Access Journals (Sweden)

    Joyce Bonaccorsi

    2013-01-01

    Environmental enrichment (EE) is known to enhance learning and memory. Declarative memories are thought to undergo a first rapid and local consolidation process, followed by a prolonged process of system consolidation, which consists of a time-dependent gradual reorganization of the brain regions supporting remote memory storage and is crucial for the formation of enduring memories. At present, it is not known whether EE can affect the process of declarative memory system consolidation. We characterized the time course of hippocampal and cortical activation following recall of progressively more remote spatial memories. Wild-type mice, either exposed to EE for 40 days or left in a standard environment, were subjected to spatial learning in the Morris water maze and to the probe test 1, 10, 20, 30, and 50 days after learning. Following the probe test, regional expression of the inducible immediate early gene c-Fos was mapped by immunohistochemistry as an indicator of neuronal activity. We found that activation of the medial prefrontal cortex (mPFC), suggested to have a privileged role in processing remote spatial memories, was evident at shorter time intervals after learning in EE mice; in addition, EE induced the progressive activation of a distributed cortical network not activated in non-EE mice. This suggests that EE not only accelerates the process of mPFC recruitment but also recruits additional cortical areas into the network supporting remote spatial memories.

  20. The Memory Aid study: protocol for a randomized controlled clinical trial evaluating the effect of computer-based working memory training in elderly patients with mild cognitive impairment (MCI).

    Science.gov (United States)

    Flak, Marianne M; Hernes, Susanne S; Chang, Linda; Ernst, Thomas; Douet, Vanessa; Skranes, Jon; Løhaugen, Gro C C

    2014-05-03

    Mild cognitive impairment (MCI) is a condition characterized by memory problems that are more severe than the normal cognitive changes due to aging, but less severe than dementia. Reduced working memory (WM) is regarded as one of the core symptoms of an MCI condition. Recent studies have indicated that WM can be improved through computer-based training. The objective of this study is to evaluate whether WM training is effective in improving cognitive function in elderly patients with MCI, and whether cognitive training induces structural changes in the white and gray matter of the brain, as assessed by structural MRI. The proposed study is a blinded, randomized, controlled trial that will include 90 elderly patients diagnosed with MCI at a hospital-based memory clinic. The participants will be randomized to either a training program or a placebo version of the program. The intervention is computerized WM training performed for 45 minutes per session, over 25 sessions in 5 weeks. The placebo version is identical in duration but is non-adaptive in the difficulty level of the tasks. Neuropsychological assessment and structural MRI will be performed before and 1 month after training, and at a 5-month follow-up. If computer-based training results in positive changes to memory functions in patients with MCI, this may represent a new, cost-effective treatment for MCI. Secondly, evaluation of any training-induced structural changes to gray or white matter will improve the current understanding of the mechanisms behind effective cognitive interventions in patients with MCI. ClinicalTrials.gov NCT01991405. November 18, 2013.

  1. Use of non-volatile memories for SSC detector readout

    International Nuclear Information System (INIS)

    Fennelly, A.J.; Woosley, J.K.; Johnson, M.B.

    1990-01-01

    Use of non-volatile memory units at the end of each fiber optic bunch/strand would substantially increase the information available from experiments by providing a complete event history, in addition to easing real-time processing requirements. This may be an alternative to enhancing the technology to optical computing techniques. Available and low-risk projected technologies will be surveyed, with costing addressed. Some discussion will be given to conversion of optical signals to electronic information, to concepts for providing timing pulses to the memory units, and to the magnetoresistive (MRAM) and ferroelectric (FERAM) random access memory technologies that may be utilized in the prototype system.

  2. Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems

    KAUST Repository

    Charara, Ali M.

    2018-05-24

    Covariance matrices are ubiquitous in computational sciences, typically describing the correlation of elements of large multivariate spatial data sets. For example, covariance matrices are employed in climate/weather modeling for the maximum likelihood estimation to improve prediction, as well as in computational ground-based astronomy to enhance the observed image quality by filtering out noise produced by the adaptive optics instruments and atmospheric turbulence. The structure of these covariance matrices is dense, symmetric, positive-definite, and often data-sparse, therefore, hierarchically of low-rank. This thesis investigates the performance limit of dense matrix computations (e.g., Cholesky factorization) on covariance matrix problems as the number of unknowns grows, and in the context of the aforementioned applications. We employ recursive formulations of some of the basic linear algebra subroutines (BLAS) to accelerate the covariance matrix computation further, while reducing data traffic across the memory subsystem layers. However, dealing with large data sets (i.e., covariance matrices of billions in size) can rapidly become prohibitive in memory footprint and algorithmic complexity. Most importantly, this thesis investigates the tile low-rank data format (TLR), a new compressed data structure and layout, which is valuable in exploiting data sparsity by approximating the operator. The TLR compressed data structure allows approximating the original problem up to user-defined numerical accuracy. This comes at the expense of dealing with tasks with much lower arithmetic intensities than traditional dense computations. In fact, this thesis consolidates the two trends of dense and data-sparse linear algebra for HPC. Not only does the thesis leverage recursive formulations for dense Cholesky-based matrix algorithms, but it also implements a novel TLR-Cholesky factorization using batched linear algebra operations to increase hardware occupancy and
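
    The tile low-rank idea can be sketched in a few lines of Python (a simplified illustration under assumed parameters, not the thesis code): truncate each off-diagonal tile of a dense covariance matrix to the smallest rank that meets a user-defined accuracy, and keep only the thin factors.

        import numpy as np

        def compress_tile(tile, tol):
            u, s, vt = np.linalg.svd(tile, full_matrices=False)
            k = max(1, int(np.sum(s > tol * s[0])))  # rank for the target accuracy
            return u[:, :k] * s[:k], vt[:k]          # thin factors

        def tlr_compress(a, nb, tol=1e-6):
            tiles = {}
            for i in range(0, a.shape[0], nb):
                for j in range(0, a.shape[1], nb):
                    t = a[i:i+nb, j:j+nb]
                    # diagonal tiles stay dense; off-diagonal tiles are compressed
                    tiles[(i, j)] = t.copy() if i == j else compress_tile(t, tol)
            return tiles

        # smooth kernels, like spatial covariances, compress strongly
        x = np.linspace(0, 1, 256)
        cov = np.exp(-np.abs(x[:, None] - x[None, :]))
        tiles = tlr_compress(cov, nb=64)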

  3. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a computer vision camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (like VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
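
    A software reference for the algorithm the camera implements in hardware may help (a sketch using SciPy; the FPGA pipeline itself is a fixed-point convolution architecture, not this code): multi-scale Laplacian-of-Gaussian filtering followed by zero-crossing detection.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def log_edges(image, sigmas=(1.0, 2.0, 4.0)):
            edges = []
            for sigma in sigmas:
                r = gaussian_laplace(image.astype(float), sigma=sigma)
                # edges pass through pixels where the LoG response changes sign
                zc = ((np.sign(r[:-1, :]) != np.sign(r[1:, :]))[:, :-1] |
                      (np.sign(r[:, :-1]) != np.sign(r[:, 1:]))[:-1, :])
                edges.append(zc)
            return edges

        frame = np.random.rand(128, 128)   # stand-in for a CMOS sensor frame
        scales = log_edges(frame)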

  4. Nonlinear dynamics of a pseudoelastic shape memory alloy system - theory and experiment

    DEFF Research Database (Denmark)

    Enemark, Søren; A Savi, M.; Santos, Ilmar

    2014-01-01

    In this work, a helical spring made from a pseudoelastic shape memory alloy was embedded in a dynamic system also composed of a mass, a linear spring and an excitation system. The mechanical behaviour of shape memory alloys is highly complex, involving hysteresis, which leads to damping capabilities...

  5. Computer Sciences and Data Systems, volume 1

    Science.gov (United States)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  6. An Artificial Flexible Visual Memory System Based on an UV-Motivated Memristor.

    Science.gov (United States)

    Chen, Shuai; Lou, Zheng; Chen, Di; Shen, Guozhen

    2018-02-01

    For the mimicry of human visual memory, a prominent challenge is how to detect and store image information with electronic devices. This demands a multifunctional integration: sensing light like the eye and memorizing image information like the brain, by transforming optical signals into electrical signals that can be recognized by electronic devices. Although current image sensors can perceive simple images in real time, the image information fades away when the external image stimuli are removed. The gap between state-of-the-art image sensors and a visual memory system inspires the logical integration of image sensors and memory devices to realize the sensing and memory process for light information in the bionic design of human visual memory. Hence, a facile architecture is designed to construct an artificial flexible visual memory system by employing a UV-motivated memristor. The visual memory arrays can realize the detection and memorization of a patterned UV light distribution with long-term retention, and the stored image information can be reset by a negative voltage sweep and reprogrammed to the same or another image distribution, which proves effective reusability. These results provide new opportunities for the mimicry of human visual memory and enable the flexible visual memory device to be applied in future wearable electronics, electronic eyes, multifunctional robotics, and auxiliary equipment for the visually impaired. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Improving the Performance and Energy Efficiency of Phase Change Memory Systems

    Institute of Scientific and Technical Information of China (English)

    王琪; 李佳芮; 王东辉

    2015-01-01

    Phase change memory (PCM) is a promising technology for future memory thanks to its better scalability and lower leakage power than DRAM (dynamic random-access memory). However, adopting PCM as main memory requires overcoming its write issues, such as long write latency and high write power. In this paper, we propose two techniques to improve the performance and energy efficiency of PCM memory systems. First, we propose a victim cache technique utilizing the existing buffer in the memory controller to reduce PCM memory accesses. The key idea is reorganizing the buffer into a victim cache structure (RBC) to provide additional hits for the LLC (last level cache). Second, we propose a chip parallelism-aware replacement policy (CPAR) for the victim cache to further improve performance. Instead of evicting one cache line at a time, CPAR evicts multiple cache lines that access different PCM chips. CPAR can reduce frequent victim cache eviction and improve the write parallelism of PCM chips. The evaluation results show that, compared with the baseline, RBC can improve PCM memory system performance by up to 9.4% and 5.4% on average. Combining CPAR with RBC (RBC+CPAR) can improve performance by up to 19.0% and 12.1% on average. Moreover, RBC and RBC+CPAR can reduce memory energy consumption by 8.3% and 6.6% on average, respectively.
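
    A toy model of the RBC idea (capacity and interface invented for the sketch; the paper's buffer reorganization is more involved): lines evicted from the LLC drop into a small victim buffer inside the memory controller, and a hit there avoids a slow PCM access.

        from collections import OrderedDict

        class VictimBuffer:
            def __init__(self, capacity=16):
                self.capacity = capacity
                self.lines = OrderedDict()          # address -> data, LRU order

            def insert(self, addr, data):
                if addr in self.lines:
                    self.lines.move_to_end(addr)
                self.lines[addr] = data
                if len(self.lines) > self.capacity:
                    self.lines.popitem(last=False)  # oldest victim goes to PCM

            def lookup(self, addr):
                if addr in self.lines:
                    self.lines.move_to_end(addr)    # refresh recency on a hit
                    return self.lines[addr]
                return None                         # miss: read from PCM

        vb = VictimBuffer()
        vb.insert(0x1000, b"line")
        assert vb.lookup(0x1000) is not None        # served without touching PCM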

  8. Context-sensitive autoassociative memories as expert systems in medical diagnosis

    Directory of Open Access Journals (Sweden)

    Olivera Fernando

    2006-11-01

    Background: The complexity of our contemporary medical practice has impelled the development of different decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit perfectly well with the vision of cognition emerging from current neurosciences. Methods: We present the context-dependent autoassociative memory model. The sets of diseases and symptoms are mapped onto a pair of bases of orthogonal vectors. A matrix memory stores the associations between the signs and symptoms, and their corresponding diseases. A minimal numerical example is presented to show how to instruct the memory and how the system works. In order to provide a quick appreciation of the validity of the model and its potential clinical relevance, we implemented an application with real data. A memory was trained with published data of neonates with suspected late-onset sepsis in a neonatal intensive care unit (NICU). A set of personal clinical observations was used as a test set to evaluate the capacity of the model to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings. Results: We show here that matrix memory models with associations modulated by context can perform automatic medical diagnosis. The sequential availability of new information over time makes the system progress in a narrowing process that reduces the range of diagnostic possibilities. At each step the system provides a probabilistic map of the different possible diagnoses to that moment. The system can incorporate the clinical experience, building in that way a representative database of historical data that captures geo-demographical differences between patient populations. The trained model succeeds in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%; percentage of true positives 91%; percentage of true negatives 100%

  9. Context-sensitive autoassociative memories as expert systems in medical diagnosis

    Science.gov (United States)

    Pomi, Andrés; Olivera, Fernando

    2006-01-01

    Background The complexity of our contemporary medical practice has impelled the development of different decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit perfectly well to the vision of cognition emerging from current neurosciences. Methods We present the context-dependent autoassociative memory model. The sets of diseases and symptoms are mapped onto a pair of basis of orthogonal vectors. A matrix memory stores the associations between the signs and symptoms, and their corresponding diseases. A minimal numerical example is presented to show how to instruct the memory and how the system works. In order to provide a quick appreciation of the validity of the model and its potential clinical relevance we implemented an application with real data. A memory was trained with published data of neonates with suspected late-onset sepsis in a neonatal intensive care unit (NICU). A set of personal clinical observations was used as a test set to evaluate the capacity of the model to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings. Results We show here that matrix memory models with associations modulated by context can perform automatic medical diagnosis. The sequential availability of new information over time makes the system progress in a narrowing process that reduces the range of diagnostic possibilities. At each step the system provides a probabilistic map of the different possible diagnoses to that moment. The system can incorporate the clinical experience, building in that way a representative database of historical data that captures geo-demographical differences between patient populations. The trained model succeeds in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%; percentage of true positives 91%; percentage of true negatives 100%; accuracy (true positives
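
    A minimal sketch of the context-dependent association scheme described in the two records above (vector sizes and the disease/symptom/context pairings are invented for illustration): diseases are bound to the Kronecker product of a context vector and a symptom vector, so the same symptom retrieves different diseases under different contexts.

        import numpy as np

        rng = np.random.default_rng(1)

        def unit(n):
            v = rng.normal(size=n)
            return v / np.linalg.norm(v)            # nearly orthogonal random codes

        n = 64
        diseases = {name: unit(n) for name in ("sepsis", "anemia")}
        symptom = unit(n)                           # e.g. "lethargy"
        contexts = {"neonate": unit(n), "adult": unit(n)}

        # store: M accumulates disease x (context (x) symptom) associations
        M = (np.outer(diseases["sepsis"], np.kron(contexts["neonate"], symptom))
             + np.outer(diseases["anemia"], np.kron(contexts["adult"], symptom)))

        # recall: the same symptom under different contexts points to
        # different diseases
        for ctx_name, ctx in contexts.items():
            out = M @ np.kron(ctx, symptom)
            best = max(diseases, key=lambda d: float(diseases[d] @ out))
            print(ctx_name, "->", best)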

  10. Memory and reward systems coproduce ‘nostalgic’ experiences in the brain

    Science.gov (United States)

    Oba, Kentaro; Noriuchi, Madoka; Atomi, Tomoaki; Moriguchi, Yoshiya

    2016-01-01

    People sometimes experience an emotional state known as ‘nostalgia’, which involves experiencing predominantly positive emotions while remembering autobiographical events. Nostalgia is thought to play an important role in psychological resilience. Previous neuroimaging studies have shown involvement of memory and reward systems in such experiences. However, it remains unclear how these two systems are collaboratively involved with nostalgia experiences. Here, we conducted a functional magnetic resonance imaging study of healthy females to investigate the relationship between memory-reward co-activation and nostalgia, using childhood-related visual stimuli. Moreover, we examined the factors constituting nostalgia and their neural correlates. We confirmed the presence of nostalgia-related activity in both memory and reward systems, including the hippocampus (HPC), substantia nigra/ventral tegmental area (SN/VTA), and ventral striatum (VS). We also found significant HPC-VS co-activation, with its strength correlating with individual ‘nostalgia tendencies’. Factor analyses showed that two dimensions underlie nostalgia: emotional and personal significance and chronological remoteness, with the former correlating with caudal SN/VTA and left anterior HPC activity, and the latter correlating with rostral SN/VTA activity. These findings demonstrate the cooperative activity of memory and reward systems, where each system has a specific role in the construction of the factors that underlie the experience of nostalgia. PMID:26060325

  11. Enhancing Assisted Living Technology with Extended Visual Memory

    Directory of Open Access Journals (Sweden)

    Joo-Hwee Lim

    2011-05-01

    Human vision and memory are powerful cognitive faculties by which we understand the world. However, they are imperfect and, further, subject to deterioration with age. We propose a cognitive-inspired computational model, Extended Visual Memory (EVM), within the Computer-Aided Vision (CAV) framework, to assist humans in vision-related tasks. We exploit wearable sensors such as cameras, GPS and ambient computing facilities to complement a user's vision and memory functions by answering four types of queries central to visual activities, namely Retrieval, Understanding, Navigation and Search. Learning in EVM relies on both frequency-based and attention-driven mechanisms to store view-based visual fragments (VF), which are abstracted into high-level visual schemas (VS), both held in visual long-term memory. During inference, visual short-term memory plays a key role in visual similarity computation between the input (or its schematic representation) and VF, exemplified from VS when necessary. We present an assisted living scenario, termed EViMAL (Extended Visual Memory for Assisted Living), targeted at mild-dementia patients, which provides novel functions such as hazard warning, visual reminders, object look-up and event review. We envisage EVM to have potential benefits in alleviating memory loss, improving recall precision and enhancing memory capacity through external support.

  12. Electro-optical system for the high speed reconstruction of computed tomography images

    International Nuclear Information System (INIS)

    Tresp, V.

    1989-01-01

    An electro-optical system for the high-speed reconstruction of computed tomography (CT) images has been built and studied. The system is capable of reconstructing high-contrast and high-resolution images at video rate (30 images per second), which is more than two orders of magnitude faster than the reconstruction rate achieved by the special-purpose digital computers used in commercial CT systems. The filtered back-projection algorithm implemented in the reconstruction system requires the filtering of all projections with a prescribed filter function. A space-integrating acousto-optical convolver, a surface acoustic wave filter and a digital finite-impulse-response filter were used for this purpose, and their performances were compared. The second part of the reconstruction, the back-projection of the filtered projections, is computationally very expensive. An optical back-projector has been built which maps the filtered projections onto the two-dimensional image space using an anamorphic lens system and a prism image rotator. The reconstructed image is viewed by a video camera, routed through a real-time image-enhancement system, and displayed on a TV monitor. The system reconstructs parallel-beam projection data and, in a modified version, is also capable of reconstructing fan-beam projection data. This extension is important since the latter are the kind of projection data actually acquired in high-speed X-ray CT scanners. The reconstruction system was tested by reconstructing precomputed projection data of phantom images. These were stored in a special-purpose projection memory and transmitted to the reconstruction system as an electronic signal. In this way, a projection measurement system that acquires projections sequentially was simulated.
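
    A digital reference for the optical pipeline may clarify the two stages (a simplified parallel-beam sketch with a random sinogram standing in for measured data): ramp-filter each projection in the Fourier domain, then smear the filtered profiles back across the image grid.

        import numpy as np

        def filtered_back_projection(sinogram, angles_deg):
            n_angles, n_det = sinogram.shape
            ramp = np.abs(np.fft.fftfreq(n_det))          # ideal ramp filter
            filtered = np.real(np.fft.ifft(
                np.fft.fft(sinogram, axis=1) * ramp, axis=1))

            grid = np.arange(n_det) - n_det / 2
            xx, yy = np.meshgrid(grid, grid)
            image = np.zeros((n_det, n_det))
            for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
                # detector coordinate of every pixel for this view
                s = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2
                image += np.interp(s.ravel(), np.arange(n_det),
                                   proj).reshape(n_det, n_det)
            return image * np.pi / (2 * len(angles_deg))

        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        sino = np.random.rand(180, 128)                   # stand-in for data
        recon = filtered_back_projection(sino, angles)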

  13. From Augustine of Hippo's Memory Systems to Our Modern Taxonomy in Cognitive Psychology and Neuroscience of Memory: A 16-Century Nap of Intuition before Light of Evidence.

    Science.gov (United States)

    Cassel, Jean-Christophe; Cassel, Daniel; Manning, Lilianne

    2013-03-01

    Over the last half century, neuropsychologists, cognitive psychologists and cognitive neuroscientists interested in human memory have accumulated evidence showing that there is not one general memory function but a variety of memory systems deserving distinct (but for an organism, complementary) functional entities. The first attempts to organize memory systems within a taxonomic construct are often traced back to the French philosopher Maine de Biran (1766-1824), who, in his book first published in 1803, distinguished mechanical memory, sensitive memory and representative memory, without, however, providing any experimental evidence in support of his view. It turns out, however, that what might be regarded as the first elaborated taxonomic proposal is 14 centuries older and is due to Augustine of Hippo (354-430), also named St Augustine, who, in Book 10 of his Confessions, by means of an introspective process that did not aim at organizing memory systems, nevertheless distinguished and commented on sensible memory, intellectual memory, memory of memories, memory of feelings and passion, and memory of forgetting. These memories were envisaged as different and complementary instances. In the current study, after a short biographical synopsis of St Augustine, we provide an outline of the philosopher's contribution, both in terms of questions and answers, and focus on how this contribution almost perfectly fits with several viewpoints of modern psychology and neuroscience of memory about human memory functions, including the notion that episodic autobiographical memory stores events of our personal history in their what, where and when dimensions, and from there enables our mental time travel. It is not at all meant that St Augustine's elaboration was the basis for the modern taxonomy, but just that the similarity is striking, and that the architecture of our current viewpoints about memory systems might have preexisted as an outstanding intuition in the philosopher

  14. External Memory Pipelining Made Easy With TPIE

    OpenAIRE

    Arge, Lars; Rav, Mathias; Svendsen, Svend C.; Truelsen, Jakob

    2017-01-01

    When handling large datasets that exceed the capacity of the main memory, movement of data between main memory and external memory (disk), rather than actual (CPU) computation time, is often the bottleneck in the computation. Since data is moved between disk and main memory in large contiguous blocks, this has led to the development of a large number of I/O-efficient algorithms that minimize the number of such block movements. TPIE is one of two major libraries that have been developed to sup...

  15. CUBESIM, Hypercube and Denelcor Hep Parallel Computer Simulation

    International Nuclear Information System (INIS)

    Dunigan, T.H.

    1988-01-01

    1 - Description of program or function: CUBESIM is a set of subroutine libraries and programs for the simulation of message-passing parallel computers and shared-memory parallel computers. Subroutines are supplied to simulate the Intel hypercube and the Denelcor HEP parallel computers. The system permits a user to develop and test parallel programs written in C or FORTRAN on a single processor. The user may alter such hypercube parameters as message startup times, packet size, and the computation-to-communication ratio. The simulation generates a trace file that can be used for debugging, performance analysis, or graphical display. 2 - Method of solution: The CUBESIM simulator is linked with the user's parallel application routines to run as a single UNIX process. The simulator library provides a small operating system to perform process and message management. 3 - Restrictions on the complexity of the problem: Up to 128 processors can be simulated with a virtual memory limit of 6 million bytes. Up to 1000 processes can be simulated

  16. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2011-07-27

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  17. Disentangling the Relationship Between the Adoption of In-Memory Computing and Firm Performance

    DEFF Research Database (Denmark)

    Fay, Marua; Müller, Oliver; vom Brocke, Jan

    2016-01-01

    Recent growth in data volume, variety, and velocity led to an increased demand for high-performance data processing and analytics solutions. In-memory computing (IMC) enables organizations to boost their information processing capacity, and is widely acknowledged to be one of the leading strategic ... at explaining the relationship between the adoption of IMC solutions and firm performance. In this research-in-progress paper we discuss the theoretical background of our work, describe the proposed research design, and develop five hypotheses for later testing. Our work aims at contributing to the research...

  18. Spin-wave interference patterns created by spin-torque nano-oscillators for memory and computation

    International Nuclear Information System (INIS)

    Macia, Ferran; Kent, Andrew D; Hoppensteadt, Frank C

    2011-01-01

    Magnetization dynamics in nanomagnets has attracted broad interest since it was predicted that a dc current flowing through a thin magnetic layer can create spin-wave excitations. These excitations are due to spin momentum transfer, a transfer of spin angular momentum between conduction electrons and the background magnetization, that enables new types of information processing. Here we show how arrays of spin-torque nano-oscillators can create propagating spin-wave interference patterns of use for memory and computation. Memristive transponders distributed on the thin film respond to threshold tunnel-magnetoresistance values, thereby allowing spin-wave detection and creating new excitation patterns. We show how groups of transponders create resonant (reverberating) spin-wave interference patterns that may be used for polychronous wave computation and information storage.

  19. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed-memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared-memory computer indicates that higher speedup is achieved on the distributed-memory system.

  20. Parallel computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, D. C.; Murthy, D. V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic analysis capability on a distributed-memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a three-dimensional unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent are demonstrated using 32 processors. The effects of subtask ordering, problem size and network topology are presented. A comparison to results on a shared-memory computer indicates that higher speedup is achieved on the distributed-memory system.
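
    The decomposition used in both records above is row-wise and therefore embarrassingly parallel. A modern sketch of the same idea (Python multiprocessing instead of transputers, and a placeholder 1/r kernel standing in for the unsteady aerodynamic model):

        import numpy as np
        from multiprocessing import Pool

        def influence_row(args):
            # one control point against all panels; rows are independent
            i, points = args
            r = np.linalg.norm(points - points[i], axis=1)
            r[i] = np.inf                    # no self-influence in this toy kernel
            return 1.0 / r

        if __name__ == "__main__":
            points = np.random.rand(256, 3)  # panel control points
            with Pool(4) as pool:
                rows = pool.map(influence_row,
                                [(i, points) for i in range(len(points))])
            aic = np.vstack(rows)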

  1. I. WORKING MEMORY CAPACITY IN CONTEXT: MODELING DYNAMIC PROCESSES OF BEHAVIOR, MEMORY, AND DEVELOPMENT.

    Science.gov (United States)

    Simmering, Vanessa R

    2016-09-01

    Working memory is a vital cognitive skill that underlies a broad range of behaviors. Higher cognitive functions are reliably predicted by working memory measures from two domains: children's performance on complex span tasks, and infants' performance in looking paradigms. Despite the similar predictive power across these research areas, theories of working memory development have not connected these different task types and developmental periods. The current project takes a first step toward bridging this gap by presenting a process-oriented theory, focusing on two tasks designed to assess visual working memory capacity in infants (the change-preference task) versus children and adults (the change detection task). Previous studies have shown inconsistent results, with capacity estimates increasing from one to four items during infancy, but only two to three items during early childhood. A probable source of this discrepancy is the different task structures used with each age group, but prior theories were not sufficiently specific to explain how performance relates across tasks. The current theory focuses on cognitive dynamics, that is, how memory representations are formed, maintained, and used within specific task contexts over development. This theory was formalized in a computational model to generate three predictions: 1) capacity estimates in the change-preference task should continue to increase beyond infancy; 2) capacity estimates should be higher in the change-preference versus change detection task when tested within individuals; and 3) performance should correlate across tasks because both rely on the same underlying memory system. I also tested a fourth prediction, that development across tasks could be explained through increasing real-time stability, realized computationally as strengthening connectivity within the model. Results confirmed these predictions, supporting the cognitive dynamics account of performance and developmental changes in real

  2. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
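
    Schematically (notation heavily simplified to scalar kernels; the paper works with the full time-dependent objects), the generalized master equation and the two resummations compared above take the form

        \begin{align}
          \dot{p}(t) &= \int_0^t \mathcal{K}(t-\tau)\, p(\tau)\, \mathrm{d}\tau,
          \qquad
          \mathcal{K} = \mathcal{K}^{(2)} + \mathcal{K}^{(4)} + \dots \\
          \mathcal{K}_{\mathrm{Pad\acute{e}}} &\approx
            \frac{\bigl(\mathcal{K}^{(2)}\bigr)^{2}}{\mathcal{K}^{(2)} - \mathcal{K}^{(4)}},
          \qquad
          \mathcal{K}_{\mathrm{exp}} \approx
            \mathcal{K}^{(2)} \exp\!\bigl(\mathcal{K}^{(4)}/\mathcal{K}^{(2)}\bigr).
        \end{align}

    Written this way, the Padé form makes the singularity mentioned above visible: it diverges wherever the fourth-order kernel approaches the second-order one, whereas the exponential (Landau-Zener-type) form stays finite.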

  3. The Focus of Attention is similar to other memory systems rather than uniquely different

    Directory of Open Access Journals (Sweden)

    Olivia Beaudry

    2014-02-01

    According to some current theories, the focus of attention, part of working memory, represents items in a privileged state that is more accessible than items stored in other memory systems. One line of evidence supporting the distinction between the focus of attention and other memory systems is the finding that items in the focus of attention are immune to proactive interference (when something learned earlier impairs the ability to remember something learned more recently). The focus of attention, then, is held to be unique: it is the only memory system that is not susceptible to proactive interference. We review the literature used to support this claim, and although there are many studies in which proactive interference was not observed, we found more studies in which it was observed. We conclude that the focus of attention is not immune to proactive interference: items in the focus of attention are susceptible to proactive interference just like items in every other memory system. And, just as in all other memory systems, it is how the items are represented and processed that plays a critical role in determining whether proactive interference will be observed.

  4. Assessing Working Memory in Children: The Comprehensive Assessment Battery for Children - Working Memory (CABC-WM).

    Science.gov (United States)

    Cabbage, Kathryn; Brinkley, Shara; Gray, Shelley; Alt, Mary; Cowan, Nelson; Green, Samuel; Kuo, Trudy; Hogan, Tiffany P

    2017-06-12

    The Comprehensive Assessment Battery for Children - Working Memory (CABC-WM) is a computer-based battery designed to assess different components of working memory in young school-age children. Working memory deficits have been identified in children with language-based learning disabilities, including dyslexia [1, 2] and language impairment [3, 4], but it is not clear whether these children exhibit deficits in subcomponents of working memory, such as visuospatial or phonological working memory. The CABC-WM is administered on a desktop computer with a touchscreen interface and was specifically developed to be engaging and motivating for children. Although the long-term goal of the CABC-WM is to provide individualized working memory profiles in children, the present study focuses on the initial success and utility of the CABC-WM for measuring central executive, visuospatial, phonological loop, and binding constructs in children with typical development. Immediate next steps are to administer the CABC-WM to children with specific language impairment, dyslexia, and comorbid specific language impairment and dyslexia.

  5. Memristor-based nanoelectronic computing circuits and architectures

    CERN Document Server

    Vourkas, Ioannis

    2016-01-01

    This book considers the design and development of nanoelectronic computing circuits, systems and architectures focusing particularly on memristors, which represent one of today’s latest technology breakthroughs in nanoelectronics. The book studies, explores, and addresses the related challenges and proposes solutions for the smooth transition from conventional circuit technologies to emerging computing memristive nanotechnologies. Its content spans from fundamental device modeling to emerging storage system architectures and novel circuit design methodologies, targeting advanced non-conventional analog/digital massively parallel computational structures. Several new results on memristor modeling, memristive interconnections, logic circuit design, memory circuit architectures, computer arithmetic systems, simulation software tools, and applications of memristors in computing are presented. High-density memristive data storage combined with memristive circuit-design paradigms and computational tools applied t...

  6. Very Dense High Speed 3u VPX Memory and Processing Space Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — While VPX shows promise as an open standard COTS computing and memory platform, there are several challenges that must be overcome to migrate the technology for a...

  7. Modeling aspects of human memory for scientific study.

    Energy Technology Data Exchange (ETDEWEB)

    Caudell, Thomas P. (University of New Mexico); Watson, Patrick (University of Illinois - Champaign-Urbana Beckman Institute); McDaniel, Mark A. (Washington University); Eichenbaum, Howard B. (Boston University); Cohen, Neal J. (University of Illinois - Champaign-Urbana Beckman Institute); Vineyard, Craig Michael; Taylor, Shawn Ellis; Bernard, Michael Lewis; Morrow, James Dan; Verzi, Stephen J.

    2009-10-01

    Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.

  8. The Processing Using Memory Paradigm:In-DRAM Bulk Copy, Initialization, Bitwise AND and OR

    OpenAIRE

    Seshadri, Vivek; Mutlu, Onur

    2016-01-01

    In existing systems, the off-chip memory interface allows the memory controller to perform only read or write operations. Therefore, to perform any operation, the processor must first read the source data and then write the result back to memory after performing the operation. This approach consumes high latency, bandwidth, and energy for operations that work on a large amount of data. Several works have proposed techniques to process data near memory by adding a small amount of compute logic...
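
    A functional model of these primitives may clarify the interface (pure software with an invented API; the point of the paper is that the real operations happen inside the DRAM array without moving data through the processor):

        import numpy as np

        class DramSubarray:
            def __init__(self, n_rows=8, row_bits=8192):
                self.rows = np.zeros((n_rows, row_bits), dtype=np.uint8)

            def rowclone_copy(self, src, dst):
                self.rows[dst] = self.rows[src]            # bulk row-to-row copy

            def bulk_and(self, a, b, dst):
                self.rows[dst] = self.rows[a] & self.rows[b]

            def bulk_or(self, a, b, dst):
                self.rows[dst] = self.rows[a] | self.rows[b]

        sub = DramSubarray()
        sub.rows[0, :16] = 1
        sub.rowclone_copy(0, 1)
        sub.bulk_and(0, 1, 2)                              # row 2 equals row 0 here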

  9. NI Based System for Seu Testing of Memory Chips for Avionics

    Directory of Open Access Journals (Sweden)

    Boruzdina Anna

    2016-01-01

    This paper presents the results of implementing a National Instruments-based system for Single Event Upset (SEU) testing of memory chips in a neutron-generator experimental facility, which is used for SEU testing for avionics purposes. A basic SEU testing algorithm with error correction and constant-error detection is presented. The issues of radiation shielding for the NI-based system are discussed and solved. The examples of experimental results show the applicability of the presented system for SEU memory testing under neutron irradiation.
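
    The basic test loop the abstract refers to can be sketched as follows (the word width, test pattern, access functions and the constant-error threshold are all assumptions for illustration): write a known pattern, read it back under irradiation, log bit flips, and rewrite so the same cell can upset again; a cell that keeps failing is flagged as a constant (stuck) error rather than an SEU.

        def seu_test_pass(read_word, write_word, n_words,
                          pattern=0x55, history=None):
            history = history if history is not None else {}
            upsets = []
            for addr in range(n_words):
                value = read_word(addr)
                if value != pattern:
                    history[addr] = history.get(addr, 0) + 1
                    kind = "constant" if history[addr] > 2 else "SEU"
                    upsets.append((addr, value ^ pattern, kind))
                    write_word(addr, pattern)   # restore for the next pass
            return upsets, history

        # usage with a simulated 1k-word memory:
        mem = [0x55] * 1024
        mem[7] ^= 0x04                          # inject one upset
        events, hist = seu_test_pass(mem.__getitem__, mem.__setitem__, len(mem))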

  10. The accessibility of memory items in children’s working memory

    OpenAIRE

    Roome, Hannah; Towse, John

    2016-01-01

    This thesis investigates the processes and systems that support recall in working memory. In particular it seeks to apply ideas from the adult-based dual-memory framework (Unsworth & Engle, 2007b) that claims primary memory and secondary memory are independent contributors to working memory capacity. These two memory systems are described as domain-general processes that combine control of attention and basic memory abilities to retain information. The empirical contribution comprises five ex...

  11. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real-time, military, banking, and wearable health care systems. The book describes design solutions for a new computer system, the evolving reconfigurable architecture (ERA), that is free from drawbacks inherent in current ICT and related engineering models, and pursues simplicity, reliability, and scalability principles of design implemented through redundancy and re-configurability, targeted for energy-,...

  12. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Full Text Available Computer systems are a critical component of human society in the 21st century. The economic sector, defense, security, energy, telecommunications, industrial production, finance, and other vital infrastructure depend on computer systems that operate at local, national, or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, the vulnerability of these systems and their exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  13. Glucocorticoids interact with the hippocampal endocannabinoid system in impairing retrieval of contextual fear memory

    Science.gov (United States)

    Atsak, Piray; Hauer, Daniela; Campolongo, Patrizia; Schelling, Gustav; McGaugh, James L.; Roozendaal, Benno

    2012-01-01

    There is extensive evidence that glucocorticoid hormones impair the retrieval of memory of emotionally arousing experiences. Although it is known that glucocorticoid effects on memory retrieval impairment depend on rapid interactions with arousal-induced noradrenergic activity, the exact mechanism underlying this presumably nongenomically mediated glucocorticoid action remains to be elucidated. Here, we show that the hippocampal endocannabinoid system, a rapidly activated retrograde messenger system, is involved in mediating glucocorticoid effects on retrieval of contextual fear memory. Systemic administration of corticosterone (0.3–3 mg/kg) to male Sprague–Dawley rats 1 h before retention testing impaired the retrieval of contextual fear memory without impairing the retrieval of auditory fear memory or directly affecting the expression of freezing behavior. Importantly, a blockade of hippocampal CB1 receptors with AM251 prevented the impairing effect of corticosterone on retrieval of contextual fear memory, whereas the same impairing dose of corticosterone increased hippocampal levels of the endocannabinoid 2-arachidonoylglycerol. We also found that antagonism of hippocampal β-adrenoceptor activity with local infusions of propranolol blocked the memory retrieval impairment induced by the CB receptor agonist WIN55,212–2. Thus, these findings strongly suggest that the endocannabinoid system plays an intermediary role in regulating rapid glucocorticoid effects on noradrenergic activity in impairing memory retrieval of emotionally arousing experiences. PMID:22331883

  14. MEMORY EFFICIENT SEMI-GLOBAL MATCHING

    Directory of Open Access Journals (Sweden)

    H. Hirschmüller

    2012-07-01

    Full Text Available Semi-Global Matching (SGM) is a robust stereo method that has proven its usefulness in various applications ranging from aerial image matching to driver assistance systems. It supports pixelwise matching for maintaining sharp object boundaries and fine structures and can be implemented efficiently on different computation hardware. Furthermore, the method is not sensitive to the choice of parameters. The structure of the matching algorithm is well suited to processing by highly parallel hardware, e.g., FPGAs and GPUs. The drawback of SGM is the temporary memory requirement, which depends on the number of pixels and the disparity range. On the one hand this results in long idle times due to the bandwidth limitations of the external memory, and on the other hand the capacity bounds are quickly reached. A full HD image with a size of 1920 × 1080 pixels and a disparity range of 512 pixels already requires about 1 billion elements, which is at least several GB of RAM, depending on the element size, and is not available on standard FPGA and GPU boards. The novel memory-efficient variant (eSGM) is an advancement in which the amount of temporary memory depends only on the number of pixels and not on the disparity range. This permits matching of huge images in one piece and reduces the memory-bandwidth requirements for real-time mobile robotics. The feature comes at the cost of 50% more compute operations compared to SGM. This overhead is compensated by the previously idle compute logic within the FPGA and the GPU and therefore results in an overall performance increase. We show that eSGM produces the same high-quality disparity images as SGM and demonstrate its performance both on an aerial image pair with 142 MPixel and within a real-time mobile robotic application. We have implemented the new method on the CPU, GPU and FPGA. We conclude that eSGM is advantageous for a GPU implementation and essential for an implementation on our FPGA.
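    The memory figures quoted in the abstract are easy to reproduce. The sketch below computes the temporary-memory requirement of classic SGM (pixels × disparity range) against a buffer that, as in eSGM, grows with the pixel count only; the per-pixel constant for eSGM is illustrative, not a figure from the paper.

```python
def sgm_temporary_elements(width: int, height: int, disparity_range: int) -> int:
    """Classic SGM aggregates a cost per pixel and per disparity hypothesis."""
    return width * height * disparity_range

def esgm_temporary_elements(width: int, height: int, per_pixel: int = 2) -> int:
    """eSGM-style buffers grow with the pixel count only; `per_pixel` is an
    illustrative constant, not a figure from the paper."""
    return width * height * per_pixel

if __name__ == "__main__":
    w, h, d = 1920, 1080, 512            # the full HD example from the abstract
    sgm = sgm_temporary_elements(w, h, d)
    print(f"SGM : {sgm:,} elements (~{sgm * 2 / 2**30:.1f} GiB at 2 bytes/element)")
    print(f"eSGM: {esgm_temporary_elements(w, h):,} elements")
```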

  15. Laser memory (hologram) and coincident redundant multiplex memory (CRM-memory)

    International Nuclear Information System (INIS)

    Ostojic, Branko

    1975-01-01

    It is shown that, besides the memory that records an object by memorizing the phases of interfering light waves (i.e., the hologram), it is possible to construct a memory that records an object by memorizing the phases of interfering impulses (CRM-memory). A mathematical description of the memory, based on an experimental model, is given. Although only the technical aspect of CRM-memory is treated in the paper, the possibility is mentioned that human memory operates on the same principle and that the invention of CRM-memory owes to a cybernetical analysis of the human eye-visual cortex system

  16. Computer-assisted machine-to-human protocols for authentication of a RAM-based embedded system

    Science.gov (United States)

    Idrissa, Abdourhamane; Aubert, Alain; Fournel, Thierry

    2012-06-01

    Mobile readers used for optical identification of manufactured products can be tampered with in different ways: with a hardware Trojan or by powering them up with fake configuration data. How can a human verifier authenticate the reader to be handled for goods verification? In this paper, two cryptographic protocols are proposed to achieve the verification of a RAM-based system through a trusted auxiliary machine. Such a system is assumed to be composed of a RAM memory and a secure block (in practice an FPGA or a configurable microcontroller). The system is connected to an input/output interface and contains a non-volatile memory where the configuration data are stored. Here, except for the secure block, all the blocks are exposed to attacks. At the registration stage of the first protocol, the MAC of both the secret and the configuration data, denoted M0, is computed by the mobile device without saving it, then transmitted to the user in a secure environment. At the verification stage, the reader, which is challenged with nonces, sends MACs/HMACs of both the nonces and the MAC M0 (to be recomputed), keyed with the secret. These responses are verified by the user through a trusted auxiliary MAC computation unit. Here the verifier does not need to track a (long) list of challenge/response pairs. This makes the protocol tractable for a human verifier, as his participation in the authentication process is increased. In counterpart, the secret has to be shared with the auxiliary unit. This constraint is relaxed in a second protocol directly derived from Fiat-Shamir's scheme.
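    The registration and verification steps can be sketched with standard primitives. The fragment below uses HMAC-SHA256 as the MAC (an assumption; the abstract does not fix the primitive) and elides the I/O between reader, user, and auxiliary unit; names such as `config` and `m0` follow the abstract's notation.

```python
import hashlib
import hmac
import os

def mac(key: bytes, *parts: bytes) -> bytes:
    """HMAC-SHA256 as the MAC primitive (an assumption, for illustration)."""
    h = hmac.new(key, digestmod=hashlib.sha256)
    for p in parts:
        h.update(p)
    return h.digest()

secret = os.urandom(32)            # shared between reader and trusted MAC unit
config = b"reader configuration bitstream"

# Registration: the reader computes M0 over the secret and configuration data.
m0 = mac(secret, config)

# Verification: the trusted auxiliary unit challenges with a fresh nonce; the
# reader must recompute M0 from the live configuration to answer correctly.
nonce = os.urandom(16)
response = mac(secret, nonce, mac(secret, config))   # reader side (recomputes M0)
expected = mac(secret, nonce, m0)                    # auxiliary unit side
print("reader authentic:", hmac.compare_digest(response, expected))
```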

  17. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU can execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and the interaction of buses between the motherboard and the CPU. Memory (RAM) fundamentally consists of semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  18. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU can execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and the interaction of buses between the motherboard and the CPU. Memory (RAM) fundamentally consists of semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  19. `Unlearning' has a stabilizing effect in collective memories

    Science.gov (United States)

    Hopfield, J. J.; Feinstein, D. I.; Palmer, R. G.

    1983-07-01

    Crick and Mitchison [1] have presented a hypothesis for the functional role of dream sleep involving an `unlearning' process. We have independently carried out mathematical and computer modelling of learning and `unlearning' in a collective neural network of 30-1,000 neurones. The model network has a content-addressable memory or `associative memory' which allows it to learn and store many memories. A particular memory can be evoked in its entirety when the network is stimulated by any adequate-sized subpart of the information of that memory [2]. But different memories of the same size are not equally easy to recall. Also, when memories are learned, spurious memories are also created and can also be evoked. Applying an `unlearning' process, similar to the learning processes but with a reversed sign and starting from a noise input, enhances the performance of the network in accessing real memories and in minimizing spurious ones. Although our model was not motivated by higher nervous function, our system displays behaviours which are strikingly parallel to those needed for the hypothesized role of `unlearning' in rapid eye movement (REM) sleep.
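    A minimal version of the scheme is easy to reproduce: store patterns with the Hebbian rule, then repeatedly settle the network from noise and apply the same rule with reversed sign. The sketch below is an illustration in that spirit, not the authors' code; the network size, unlearning rate, and trial counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                            # neurons and stored patterns (assumptions)

patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N          # Hebbian learning of all patterns
np.fill_diagonal(W, 0.0)

def settle(state, sweeps=20):
    """Asynchronous threshold updates until the network relaxes to an attractor."""
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

def unlearn(W, epsilon=0.01, trials=50):
    """Start from noise, settle to an (often spurious) attractor, then apply
    the Hebbian rule with reversed sign, as in the `unlearning' procedure."""
    for _ in range(trials):
        s = settle(rng.choice([-1.0, 1.0], size=N))
        W -= epsilon * np.outer(s, s) / N
        np.fill_diagonal(W, 0.0)
    return W

W = unlearn(W)
cue = patterns[0].astype(float)
cue[: N // 4] *= -1                      # corrupt a quarter of the bits
recalled = settle(cue.copy())
print("overlap with stored pattern:", abs(recalled @ patterns[0]) / N)
```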

  20. Concurrent Operations of O2-Tree on Shared Memory Multicore Architectures

    Directory of Open Access Journals (Sweden)

    Daniel Ohene-Kwofie

    2014-05-01

    Full Text Available Modern computer architectures provide high-performance computing capability by having multiple CPU cores. Such systems are also typically associated with very large main-memory capacities, thereby allowing them to be used for fast processing of in-memory database applications. However, most of the concurrency control mechanisms associated with the index structures of these memory-resident databases do not scale well under high transaction rates. This paper presents the O2-Tree, a fast main-memory-resident index, which is also highly scalable and tolerant of high transaction rates in a concurrent environment using the relaxed balancing tree algorithm. The O2-Tree is a modified Red-Black tree in which the leaf nodes are formed into blocks that hold key-value pairs, while each internal node stores a single key that results from splitting leaf nodes. Multi-threaded concurrent manipulation of the O2-Tree outperforms popular NoSQL-based key-value stores considered in this paper.
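    The leaf-block organization described above, sorted key-value pairs in the leaves with a single separator key promoted on a split, can be sketched as follows. This is an illustration of the idea only; the block capacity is arbitrary, and the surrounding red-black rebalancing and concurrency control are omitted.

```python
import bisect

BLOCK_CAPACITY = 4  # illustrative; a real index would tune this for cache behavior

class LeafBlock:
    """Leaf node holding sorted key-value pairs, as in the O2-Tree's leaves."""
    def __init__(self):
        self.keys, self.values = [], []

    def insert(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            self.values[i] = value          # overwrite an existing key
        else:
            self.keys.insert(i, key)
            self.values.insert(i, value)

    def split(self):
        """Return (separator_key, right_block); the separator is the single key
        that would be stored in a new internal node of the tree."""
        mid = len(self.keys) // 2
        right = LeafBlock()
        right.keys, right.values = self.keys[mid:], self.values[mid:]
        self.keys, self.values = self.keys[:mid], self.values[:mid]
        return right.keys[0], right

leaf = LeafBlock()
for k in [17, 3, 42, 8, 25]:
    leaf.insert(k, f"v{k}")
if len(leaf.keys) > BLOCK_CAPACITY:
    sep, right = leaf.split()
    print("promoted separator:", sep, "left:", leaf.keys, "right:", right.keys)
```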

  1. A Hybrid Approach to Processing Big Data Graphs on Memory-Restricted Systems

    KAUST Repository

    Harshvardhan,

    2015-05-01

    With the advent of big data, processing large graphs quickly has become increasingly important. Most existing approaches either utilize in-memory processing techniques that can only process graphs that fit completely in RAM, or disk-based techniques that sacrifice performance. In this work, we propose a novel RAM-disk hybrid approach to graph processing that can scale well from a single shared-memory node to large distributed-memory systems. It works by partitioning the graph into subgraphs that fit in RAM and uses a paging-like technique to load them. We show that, without modifying the algorithms, this approach can scale from small memory-constrained systems (such as tablets) to large-scale distributed machines with 16,000+ cores.
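    The partition-then-page pattern can be illustrated independently of any particular framework. The sketch below spills an edge list to disk in fixed-size pieces and then computes out-degrees by loading one piece at a time, so peak memory is bounded by the partition size; the partition size and JSON format are arbitrary choices, not details from the paper.

```python
import json
import os
import tempfile
from collections import Counter

EDGES_PER_PARTITION = 1000  # sized so each partition fits in available RAM (assumption)

def partition_edges(edges, workdir):
    """Spill the edge list to disk in RAM-sized pieces, one file per partition."""
    paths = []
    for start in range(0, len(edges), EDGES_PER_PARTITION):
        path = os.path.join(workdir, f"part{len(paths)}.json")
        with open(path, "w") as f:
            json.dump(edges[start:start + EDGES_PER_PARTITION], f)
        paths.append(path)
    return paths

def out_degrees(partition_paths):
    """Page one subgraph into memory at a time, accumulating a global result."""
    degrees = Counter()
    for path in partition_paths:
        with open(path) as f:
            for src, _dst in json.load(f):   # only this partition is resident
                degrees[src] += 1
    return degrees

edges = [(i % 50, (i * 7) % 50) for i in range(5000)]
with tempfile.TemporaryDirectory() as tmp:
    parts = partition_edges(edges, tmp)
    print(len(parts), "partitions; max out-degree:", max(out_degrees(parts).values()))
```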

  2. A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems

    Directory of Open Access Journals (Sweden)

    Kursat Zuhtuogullari

    2013-01-01

    Full Text Available Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from input-dimension limits and information storage problems. These problems are eliminated by means of the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.

  3. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    Science.gov (United States)

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from input-dimension limits and information storage problems. These problems are eliminated by means of the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
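    Of the mechanisms named in the abstract, roulette wheel selection is the simplest to show in isolation. The sketch below selects a parent reduct mask with probability proportional to fitness; the 12-bit masks echo the twelve urological attributes, but the fitness function is hypothetical and stands in for the paper's actual objective.

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick an individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point rounding at the edge

# Candidate reducts: bit masks over twelve input attributes (1 = keep attribute).
population = [[random.randint(0, 1) for _ in range(12)] for _ in range(6)]
# Hypothetical fitness favoring smaller reducts; a stand-in for the real objective.
fitnesses = [12 - sum(mask) + 1 for mask in population]
parent = roulette_wheel_select(population, fitnesses)
print("selected parent mask:", parent)
```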

  4. New computer systems

    International Nuclear Information System (INIS)

    Faerber, G.

    1975-01-01

    Process computers have already become indispensable technical aids for monitoring and automation tasks in nuclear power stations. Yet there are still some problems connected with their use whose elimination should be the main objective in the development of new computer systems. In the paper, some of these problems are summarized, new tendencies in hardware development are outlined, and finally some new system concepts made possible by the hardware development are explained. (orig./AK) [de

  5. Computing the non-Markovian coarse-grained interactions derived from the Mori-Zwanzig formalism in molecular systems: Application to polymer melts

    Science.gov (United States)

    Li, Zhen; Lee, Hee Sun; Darve, Eric; Karniadakis, George Em

    2017-01-01

    Memory effects are often introduced during coarse-graining of a complex dynamical system. In particular, a generalized Langevin equation (GLE) for the coarse-grained (CG) system arises in the context of Mori-Zwanzig formalism. Upon a pairwise decomposition, GLE can be reformulated into its pairwise version, i.e., non-Markovian dissipative particle dynamics (DPD). GLE models the dynamics of a single coarse particle, while DPD considers the dynamics of many interacting CG particles, with both CG systems governed by non-Markovian interactions. We compare two different methods for the practical implementation of the non-Markovian interactions in GLE and DPD systems. More specifically, a direct evaluation of the non-Markovian (NM) terms is performed in LE-NM and DPD-NM models, which requires the storage of historical information that significantly increases computational complexity. Alternatively, we use a few auxiliary variables in LE-AUX and DPD-AUX models to replace the non-Markovian dynamics with a Markovian dynamics in a higher dimensional space, leading to a much reduced memory footprint and computational cost. In our numerical benchmarks, the GLE and non-Markovian DPD models are constructed from molecular dynamics (MD) simulations of star-polymer melts. Results show that a Markovian dynamics with auxiliary variables successfully generates equivalent non-Markovian dynamics consistent with the reference MD system, while maintaining a tractable computational cost. Also, transient subdiffusion of the star-polymers observed in the MD system can be reproduced by the coarse-grained models. The non-interacting particle models, LE-NM/AUX, are computationally much cheaper than the interacting particle models, DPD-NM/AUX. However, the pairwise models with momentum conservation are more appropriate for correctly reproducing the long-time hydrodynamics characterised by an algebraic decay in the velocity autocorrelation function.
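    The auxiliary-variable idea can be illustrated for the simplest case, an exponential memory kernel K(t) = (c/τ)exp(-t/τ), where introducing z(t) = -∫₀ᵗ K(t-s)v(s)ds turns the GLE into the Markovian pair m dv/dt = z + R and dz/dt = -z/τ - (c/τ)v. The sketch below integrates that pair; the parameters and the simplified white-noise forcing are assumptions for illustration, not the paper's fluctuation-dissipation-consistent construction.

```python
import numpy as np

# GLE with exponential memory kernel K(t) = (c/tau) * exp(-t/tau):
#     m dv/dt = -int_0^t K(t-s) v(s) ds + R(t)
# The auxiliary variable z(t) = -int_0^t K(t-s) v(s) ds obeys
#     dz/dt = -z/tau - (c/tau) v,
# so the pair (v, z) is Markovian and needs no stored history.
rng = np.random.default_rng(1)
m, c, tau, dt, steps = 1.0, 1.0, 0.5, 1e-3, 20000
noise = np.sqrt(2 * c / tau * dt)  # illustrative noise amplitude (assumption)

v, z = 1.0, 0.0
vs = np.empty(steps)
for n in range(steps):
    v += dt * z / m + noise * rng.standard_normal() / m
    z += dt * (-z / tau - (c / tau) * v)
    vs[n] = v

print("long-time mean velocity (should be near 0):", round(vs[steps // 2:].mean(), 3))
```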

  6. Propagating fronts in reaction-transport systems with memory

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, A. [Department of Chemistry, Southern Methodist University, Dallas, TX 75275-0314 (United States)], E-mail: ayadav1@lsu.edu; Fedotov, Sergei [School of Mathematics, University of Manchester, Manchester M60 1DQ (United Kingdom)], E-mail: sergei.fedotov@manchester.ac.uk; Mendez, Vicenc [Grup de Fisica Estadistica, Departament de Fisica, Universitat Autonoma de Barcelona, E-08193 Bellaterra (Spain)], E-mail: vicenc.mendez@uab.es; Horsthemke, Werner [Department of Chemistry, Southern Methodist University, Dallas, TX 75275-0314 (United States)], E-mail: whorsthe@smu.edu

    2007-11-26

    In reaction-transport systems with non-standard diffusion, the memory of the transport causes a coupling of reactions and transport. We investigate the effect of this coupling for systems with Fisher-type kinetics and obtain a general analytical expression for the front speed. We apply our results to the specific case of subdiffusion.

  7. One declarative memory system or two? The relationship between episodic and semantic memory in children with temporal lobe epilepsy.

    Science.gov (United States)

    Smith, Mary Lou; Lah, Suncica

    2011-09-01

    This study explored verbal semantic and episodic memory in children with unilateral temporal lobe epilepsy to determine whether they had impairments in both or only 1 aspect of memory, and to examine relations between performance in the 2 domains. Sixty-six children and adolescents (37 with seizures of left temporal lobe onset, 29 with right-sided onset) were given 4 tasks assessing different aspects of semantic memory (picture naming, fluency, knowledge of facts, knowledge of word meanings) and 2 episodic memory tasks (story recall, word list recall). High rates of impairments were observed across tasks, and no differences were found related to the laterality of the seizures. Individual patient analyses showed that there was a double dissociation between the 2 aspects of memory in that some children were impaired on episodic but not semantic memory, whereas others showed intact episodic but impaired semantic memory. This double dissociation suggests that these 2 memory systems may develop independently in the context of temporal lobe pathology, perhaps related to differential effects of dysfunction in the lateral and mesial temporal lobe structures. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  8. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    Science.gov (United States)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single-layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
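    A damped Gauss-Newton update with an adaptive learning rate can be sketched on a toy filter with a linear FIR stage followed by a static tanh nonlinearity, mirroring the separable structure the patent describes. Everything concrete below, the data, the damping constant, and the doubling/halving rule for the step size, is an illustrative stand-in, not the patented algorithm's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(w, X):
    """Static nonlinearity applied to a linear FIR stage: y = tanh(X @ w)."""
    return np.tanh(X @ w)

# Synthetic training data from a 'true' filter (illustrative setup).
w_true = np.array([0.8, -0.5, 0.3])
X = rng.standard_normal((200, 3))            # rows = tapped-delay-line inputs
y = model(w_true, X) + 0.01 * rng.standard_normal(200)

w, mu = np.zeros(3), 1.0                     # parameters and adaptive step size
for _ in range(25):
    r = y - model(w, X)                      # residuals
    J = (1 - np.tanh(X @ w) ** 2)[:, None] * X   # Jacobian of the model output
    # Damped Gauss-Newton step, closer to the Newton direction than a gradient step.
    step = np.linalg.solve(J.T @ J + 1e-6 * np.eye(3), J.T @ r)
    if ((y - model(w + mu * step, X)) ** 2).sum() < r @ r:
        w += mu * step
        mu = min(2 * mu, 1.0)                # cost fell: grow the learning rate
    else:
        mu *= 0.5                            # cost rose: shrink it and retry

print("recovered taps:", np.round(w, 2))
```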

  9. FFT transformed quantitative EEG analysis of short term memory load.

    Science.gov (United States)

    Singh, Yogesh; Singh, Jayvardhan; Sharma, Ratna; Talwar, Anjana

    2015-07-01

    The EEG is considered a building block of functional signaling in the brain, and the role of EEG oscillations in human information processing has been intensively investigated. The aim of this study was to examine the quantitative EEG correlates of short-term memory load as assessed through the Sternberg memory test. The study was conducted on 34 healthy male student volunteers. The intervention consisted of the Sternberg memory test, run as a software version of the Sternberg memory scanning paradigm on a computer. Electroencephalography (EEG) was recorded from 19 scalp locations according to the international 10-20 system of electrode placement. EEG signals were analyzed offline. To overcome the problems of a fixed-band system, an individual alpha frequency (IAF) based frequency-band selection method was adopted. The outcome measures were FFT-transformed absolute powers in the six bands at the 19 electrode positions. The Sternberg memory test served as a model of short-term memory load. Memory load was reflected in decreased absolute power in the upper alpha band at nearly all electrode positions, and increased power in the theta band over the fronto-temporal region and in the lower-1 alpha band over the fronto-central region. Lower-2 alpha, beta, and gamma band power remained unchanged. Short-term memory load thus has distinct electroencephalographic correlates resembling the mentally stressed state. This is evident from the decreased power in the upper alpha band (corresponding to the alpha band of the traditional EEG system), which is the representative band of a relaxed mental state. The fronto-temporal theta power changes may reflect the encoding and execution of the memory task.
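    The analysis pipeline, FFT the epoch, locate the individual alpha frequency as the spectral peak, and sum absolute power in IAF-anchored bands, can be sketched as follows. The sampling rate, epoch length, toy signal, and band offsets are assumptions for illustration; the study's exact band definitions may differ.

```python
import numpy as np

fs, seconds = 256, 4                      # sampling rate and epoch length (assumptions)
t = np.arange(fs * seconds) / fs
rng = np.random.default_rng(3)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # toy signal

spectrum = np.abs(np.fft.rfft(eeg)) ** 2  # FFT-based absolute power spectrum
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

alpha_mask = (freqs >= 7) & (freqs <= 13)
iaf = freqs[alpha_mask][np.argmax(spectrum[alpha_mask])]
# Individual alpha frequency = spectral peak within the broad alpha range.

# IAF-anchored bands (offsets are illustrative, not the paper's exact values).
bands = {"theta": (iaf - 6, iaf - 4),
         "lower alpha": (iaf - 4, iaf),
         "upper alpha": (iaf, iaf + 2)}
for name, (lo, hi) in bands.items():
    power = spectrum[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{name:12s} absolute power: {power:10.1f}")
```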

  10. Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute

    OpenAIRE

    Dang Hung; Dinh Tien Tuan Anh; Chang Ee-Chien; Ooi Beng Chin

    2017-01-01

    We consider privacy-preserving computation of big data using trusted computing primitives with limited private memory. Simply ensuring that the data remains encrypted outside the trusted computing environment is insufficient to preserve data privacy, for data movement observed during computation could leak information. While it is possible to thwart such leakage using a generic solution such as ORAM [42], designing efficient privacy-preserving algorithms is challenging. Besides computation effi...
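    The scramble-then-compute idea, at its simplest, is to apply a secret random permutation to the data before running a computation, so that the observed order of accesses is statistically independent of the original record order. The sketch below shows only that permutation step, on plaintext values; a real system of the kind the paper studies would operate on encrypted records inside the trusted unit.

```python
import secrets

def scramble(records):
    """Apply a secret random permutation before computing ('scramble' step)."""
    permuted = list(records)
    # Fisher-Yates shuffle driven by a cryptographic RNG.
    for i in range(len(permuted) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        permuted[i], permuted[j] = permuted[j], permuted[i]
    return permuted

def compute(permuted, threshold):
    """The 'compute' step runs on scrambled data, so which slots are touched
    no longer reveals which original records matched the predicate."""
    return sum(1 for value in permuted if value > threshold)

records = [12, 97, 3, 55, 61, 8, 40, 76]
print("count above 50:", compute(scramble(records), 50))
```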

  11. A VAX/VMS mapped section/virtual memory utility package: Yucca Mountain Project

    International Nuclear Information System (INIS)

    Yarrington, L.

    1990-02-01

    A VAX/VMS Mapped Section/Virtual Memory Utility Package is a collection of FORTRAN subprograms that allocate virtual memory and, optionally, map that memory to a file. The subprograms use VMS system services and run-time libraries for allocating and mapping memory; therefore, the utility package is system dependent and functional on that platform only. FORTRAN-77 is one of the most widely used languages for computer programming. Languages have been developed in the past few decades that provide more powerful tools than FORTRAN and overcome some of its limitations. Two limitations addressed by this paper which have been a source of frustration to many programmers are that (1) FORTRAN does not provide dynamic array allocation and (2) FORTRAN file input-output is very slow. The solutions presented here are for the VAX/VMS operating system and use system services that are not part of the standard FORTRAN language description. Also discussed in this paper are dynamic array allocation, mapped sections of the program memory, and support modules. 3 refs
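    The mapped-section idea is not VMS-specific; most operating systems expose the same facility. As an analogous illustration only (not the package's FORTRAN interface), the sketch below maps a file into memory with Python's mmap module, so ordinary in-memory writes are paged to the file by the OS rather than issued as explicit file I/O.

```python
import mmap
import os
import tempfile

# Create a file and map it into the process address space: the same idea as a
# "mapped section", where array contents live in a file and the operating
# system pages them in and out instead of explicit read/write calls.
path = os.path.join(tempfile.gettempdir(), "mapped_section.dat")
with open(path, "wb") as f:
    f.truncate(4096)                 # reserve one page of backing store

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as section:
        section[0:5] = b"hello"      # ordinary memory writes...
        section.flush()              # ...persisted to the file on flush

with open(path, "rb") as f:
    print(f.read(5))                 # b'hello'
os.remove(path)
```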

  12. Distribution of return point memory states for systems with stochastic inputs

    International Nuclear Information System (INIS)

    Amann, A; Brokate, M; Rachinskii, D; Temnov, G

    2011-01-01

    We consider the long-term effect of stochastic inputs on the state of an open-loop system which exhibits so-called return point memory. An example of such a system is the Preisach model; more generally, systems with a Preisach-type input-state relationship, such as spin-interaction models, are considered. We focus on the characterisation of the expected memory configuration after the system has been affected by the input for a sufficiently long period of time. In the case where the input is given by a discrete-time random walk process, or the Wiener process, simple closed-form expressions for the probability density of the vector of the main input extrema recorded by the memory state, and scaling laws for the dimension of this vector, are derived. If the input is given by a general continuous Markov process, we show that the distribution of previous memory elements can be obtained from a Markov chain scheme which is derived from the solution of an associated one-dimensional escape-type problem. Formulas for the transition probabilities defining this Markov chain scheme are presented. Moreover, explicit formulas for the conditional probability densities of previous main extrema are obtained for the Ornstein-Uhlenbeck input process. The analytical results are confirmed by numerical experiments.
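    The return-point-memory bookkeeping behind these results can be sketched directly: the memory state is a nested sequence of corner pairs (M_k, m_k) with decreasing maxima and increasing minima, and a new input wipes out every corner it sweeps past. The sketch below applies that wiping-out rule to a discrete-time random-walk input; it illustrates the state update only, not the paper's probability densities or scaling laws.

```python
import random

def simulate(inputs):
    """Track the main-extrema memory (Preisach corner pairs) left by an input
    sequence.  Corners (M_k, m_k) are nested, with maxima decreasing and
    minima increasing; a new input erases every corner it sweeps past."""
    pairs = []                           # committed corners, oldest first
    u_prev = inputs[0]
    rising = inputs[1] > inputs[0]
    last_max = None if rising else inputs[0]
    for u in inputs[1:]:
        if rising and u < u_prev:        # reversal at a local maximum
            last_max, rising = u_prev, False
        elif not rising and u > u_prev:  # reversal at a local minimum
            if last_max is not None and last_max > u_prev:
                pairs.append((last_max, u_prev))   # commit corner (M, m)
            rising = True
        if rising:                       # wiping-out rule, rising branch
            while pairs and pairs[-1][0] <= u:
                pairs.pop()
        else:                            # wiping-out rule, falling branch
            while pairs and pairs[-1][1] >= u:
                pairs.pop()
        u_prev = u
    return pairs

random.seed(4)
walk = [0.0]
for _ in range(500):                     # discrete-time random-walk input
    walk.append(walk[-1] + random.gauss(0.0, 1.0))
for M, m in simulate(walk):
    print(f"surviving corner: max {M:7.2f}  min {m:7.2f}")
```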

  13. From Augustine of Hippo’s Memory Systems to Our Modern Taxonomy in Cognitive Psychology and Neuroscience of Memory: A 16-Century Nap of Intuition before Light of Evidence

    Directory of Open Access Journals (Sweden)

    Jean-Christophe Cassel

    2012-12-01

    Full Text Available Over the last half century, neuropsychologists, cognitive psychologists and cognitive neuroscientists interested in human memory have accumulated evidence showing that there is not one general memory function but a variety of memory systems deserving distinct (but, for an organism, complementary) functional entities. The first attempts to organize memory systems within a taxonomic construct are often traced back to the French philosopher Maine de Biran (1766–1824), who, in his book first published in 1803, distinguished mechanical memory, sensitive memory and representative memory, without, however, providing any experimental evidence in support of his view. It turns out, however, that what might be regarded as the first elaborated taxonomic proposal is 14 centuries older and is due to Augustine of Hippo (354–430), also named St Augustine, who, in Book 10 of his Confessions, by means of an introspective process that did not aim at organizing memory systems, nevertheless distinguished and commented on sensible memory, intellectual memory, memory of memories, memory of feelings and passion, and memory of forgetting. These memories were envisaged as different and complementary instances. In the current study, after a short biographical synopsis of St Augustine, we provide an outline of the philosopher's contribution, both in terms of questions and answers, and focus on how this contribution almost perfectly fits with several viewpoints of modern psychology and neuroscience of memory about human memory functions, including the notion that episodic autobiographical memory stores events of our personal history in their what, where and when dimensions, and from there enables our mental time travel. It is not at all meant that St Augustine's elaboration was the basis for the modern taxonomy, but just that the similarity is striking, and that the architecture of our current viewpoints about memory systems might have preexisted as an outstanding

  14. Test-Retest Reliability of Computerized, Everyday Memory Measures and Traditional Memory Tests.

    Science.gov (United States)

    Youngjohn, James R.; And Others

    Test-retest reliabilities and practice effect magnitudes were considered for nine computer-simulated tasks of everyday cognition and five traditional neuropsychological tests. The nine simulated everyday memory tests were from the Memory Assessment Clinic battery as follows: (1) simple reaction time while driving; (2) divided attention (driving…

  15. Konsep Memory Systems dalam Iklan ‘Diskon Ramadhan’

    Directory of Open Access Journals (Sweden)

    Elsye Rumondang Damanik

    2011-10-01

    Full Text Available The purpose of this article is to discuss and revisit the concept of memory systems and its role in marketing activity. The information-processing activity involved in marketing makes this concept important to discuss. To limit the scope of the discussion, the article considers only the human role as a consumer in marketing activity, and the effects of the memory system in helping human beings process marketing-related information. In preparing the article, the writer gathered data and information through a literature study of books and of information from the mass media. The result is that it is important for marketers to understand the stages by which their consumers process information, and how sellers optimize, or perhaps manipulate, those stages to win the market.

  16. Memory Reconsolidation and Computational Learning

    Science.gov (United States)

    2010-03-01

    Siegelmann-Danieli and H.T. Siegelmann, "Robust Artificial Life Via Artificial Programmed Death," Artificial Intelligence 172(6-7), April 2008: 884-898. Memory models are central to Artificial Intelligence and Machine ... beyond [1]. The advances cited are a significant step toward creating Artificial Intelligence via neural networks at the human level. Our network...

  17. Belle computing system

    International Nuclear Information System (INIS)

    Adachi, Ichiro; Hibino, Taisuke; Hinz, Luc; Itoh, Ryosuke; Katayama, Nobu; Nishida, Shohei; Ronga, Frederic; Tsukamoto, Toshifumi; Yokoyama, Masahiko

    2004-01-01

    We describe the present status of the computing system in the Belle experiment at the KEKB e+e- asymmetric-energy collider. So far, we have logged more than 160 fb-1 of data, corresponding to the world's largest data sample of 170M BB-bar pairs at the Υ(4S) energy region. A large amount of event data has to be processed to produce an analysis event sample in a timely fashion. In addition, Monte Carlo events have to be created to control systematic errors accurately. This requires stable and efficient usage of computing resources. Here, we review our computing model and then describe how we efficiently carry out DST/MC production in our system

  18. Overlap in the functional neural systems involved in semantic and episodic memory retrieval.

    Science.gov (United States)

    Rajah, M N; McIntosh, A R

    2005-03-01

    Neuroimaging and neuropsychological data suggest that episodic and semantic memory may be mediated by distinct neural systems. However, an alternative perspective is that episodic and semantic memory represent different modes of processing within a single declarative memory system. To examine whether the multiple- or the unitary-system view better represents the data, we conducted a network analysis using multivariate partial least squares (PLS) activation analysis followed by covariance structural equation modeling (SEM) of positron emission tomography data obtained while healthy adults performed episodic and semantic verbal retrieval tasks. It is argued that if performance of episodic and semantic retrieval tasks is mediated by different memory systems, then there should be differences in both regional activations and interregional correlations related to each type of retrieval task. The PLS results identified brain regions that were differentially active during episodic versus semantic retrieval. Regions that showed maximal differences in regional activity between retrieval tasks were used to construct separate functional models for episodic and semantic retrieval. Omnibus tests failed to find a significant difference across tasks for either functional model. The pattern of path coefficients for the episodic retrieval model was not different across tasks, nor were the path coefficients for the semantic retrieval model. The SEM results suggest that the same memory network/system was engaged across tasks, given the similarities in path coefficients. Therefore, activation differences between episodic and semantic retrieval may reflect variation along a continuum of processing during task performance within the context of a single memory system.

  19. Exploring Shared-Memory Optimizations for an Unstructured Mesh CFD Application on Modern Parallel Systems

    KAUST Repository

    Mudigere, Dheevatsa; Sridharan, Srinivas; Deshpande, Anand; Park, Jongsoo; Heinecke, Alexander; Smelyanskiy, Mikhail; Kaul, Bharat; Dubey, Pradeep; Kaushik, Dinesh; Keyes, David E.

    2015-01-01

    -grid implicit flow solver, which forms the backbone of computational aerodynamics, poses particular challenges due to its large irregular working sets, unstructured memory accesses, and variable/limited amount of parallelism. This code, based on a domain

  20. Distinctive Features Hold a Privileged Status in the Computation of Word Meaning: Implications for Theories of Semantic Memory

    Science.gov (United States)

    Cree, George S.; McNorgan, Chris; McRae, Ken

    2006-01-01

    The authors present data from 2 feature verification experiments designed to determine whether distinctive features have a privileged status in the computation of word meaning. They use an attractor-based connectionist model of semantic memory to derive predictions for the experiments. Contrary to central predictions of the conceptual structure…