WorldWideScience

Sample records for computer memory

  1. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

    The main goal of this project is to investigate, develop, and implement algorithms for numerical linear algebra on parallel computers in order to acquire expertise in methods for parallel computations. An important motivation for analyzing and investigating the potential for parallelism in these algorithms is that many scientific applications rely heavily on the performance of the involved dense linear algebra building blocks. Even though we consider the distributed-memory as well as the shared-memory programming paradigm, the major part of the thesis is dedicated to distributed-memory architectures. We emphasize distributed-memory massively parallel computers - such as the Connection Machines model CM-200 and model CM-5/CM-5E - available to us at UNI-C and at Thinking Machines Corporation. The CM-200 was at the time this project started one of the few existing massively parallel computers...

  2. Paging memory from random access memory to backing storage in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
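
    As a rough illustration of the idea in this record (the class names and the FIFO eviction policy below are our own assumptions, not the patent's mechanism), the following Python sketch pages a frame out of local RAM into a backing store that stands in for the second compute node:

```python
PAGE_SIZE = 4096

class RemoteStore:
    """Backing storage provided by the second compute node (simulated locally)."""
    def __init__(self):
        self.pages = {}

    def put(self, page_id, data):
        self.pages[page_id] = bytes(data)

    def get(self, page_id):
        return self.pages.pop(page_id)

class PagedRAM:
    """RAM of the first compute node, limited to a fixed number of frames."""
    def __init__(self, store, max_frames=2):
        self.store = store
        self.max_frames = max_frames
        self.frames = {}  # page_id -> bytearray; insertion order gives FIFO eviction

    def access(self, page_id):
        if page_id not in self.frames:                   # page fault
            if len(self.frames) >= self.max_frames:      # no free frame: evict one
                victim = next(iter(self.frames))
                self.store.put(victim, self.frames.pop(victim))  # swap out to remote node
            if page_id in self.store.pages:              # swap back in from remote node
                self.frames[page_id] = bytearray(self.store.get(page_id))
            else:                                        # first touch: zero-filled page
                self.frames[page_id] = bytearray(PAGE_SIZE)
        return self.frames[page_id]

ram = PagedRAM(RemoteStore())
for pid in [0, 1, 2, 0, 3]:   # touching page 2 swaps page 0 out; touching 0 swaps it back in
    ram.access(pid)
print(sorted(ram.frames))     # resident pages after the access sequence
```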

  3. The computational nature of memory modification.

    Science.gov (United States)

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-03-15

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature.

  4. The computational nature of memory modification

    Science.gov (United States)

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-01-01

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature. DOI: http://dx.doi.org/10.7554/eLife.23763.001 PMID:28294944
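
    A minimal sketch of the paper's central idea, under strong simplifying assumptions (greedy MAP assignment, Gaussian observations, unit-variance prior for new causes; not the authors' implementation): each trial either modifies the memory trace of an existing latent cause or spawns a new one, Chinese-restaurant-process style.

```python
import numpy as np

def assign_latent_causes(obs, alpha=1.0, noise=1.0):
    """Greedy MAP assignment: each trial joins the best old cause or a new one.
    obs: (n_trials, n_features). Returns one cause label per trial."""
    labels, means, counts = [], [], []
    for x in obs:
        # score existing causes: CRP prior (cluster size) + Gaussian likelihood
        scores = [np.log(n) - np.sum((x - m) ** 2) / (2 * noise ** 2)
                  for m, n in zip(means, counts)]
        # new-cause score: CRP concentration + prior predictive (assumed unit prior)
        scores.append(np.log(alpha) - np.sum(x ** 2) / (2 * (noise ** 2 + 1.0)))
        k = int(np.argmax(scores))
        if k == len(means):                    # new latent cause -> new memory
            means.append(x.astype(float))
            counts.append(1)
        else:                                  # old cause -> modify its trace
            counts[k] += 1
            means[k] += (x - means[k]) / counts[k]
        labels.append(k)
    return labels

# Two blocks of trials separated by a sensory shift split into two latent causes.
rng = np.random.default_rng(0)
obs = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
print(assign_latent_causes(obs))   # expected: ten 0s followed by ten 1s
```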

  5. Parallel structures in human and computer memory

    Science.gov (United States)

    Kanerva, Pentti

    1986-08-01

    If we think of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library: We recognize a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. This paper is about how to construct a computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. The paper concludes that the frame problem of artificial intelligence could be solved by the use of such a memory if we were able to encode information about the world properly.
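
    A toy sparse distributed memory in the spirit of this paper (the parameters and sign thresholding below are assumptions, not Kanerva's exact construction) shows how an approximate cue can retrieve a stored pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
N_LOC, DIM, RADIUS = 2000, 256, 112     # hard locations, word width, activation radius

hard_addr = rng.integers(0, 2, (N_LOC, DIM))   # fixed random addresses
counters = np.zeros((N_LOC, DIM), dtype=int)   # contents of each location

def activated(addr):
    """Locations within Hamming distance RADIUS of the cue."""
    return np.count_nonzero(hard_addr != addr, axis=1) <= RADIUS

def write(addr, data):
    counters[activated(addr)] += 2 * data - 1  # +1 for a 1-bit, -1 for a 0-bit

def read(addr):
    return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, DIM)
write(pattern, pattern)                         # autoassociative store
noisy = pattern.copy()
flip = rng.choice(DIM, 20, replace=False)
noisy[flip] ^= 1                                # approximate retrieval cue
print(np.mean(read(noisy) == pattern))          # near 1.0: pattern recovered
```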

  6. Memory systems, computation, and the second law of thermodynamics

    International Nuclear Information System (INIS)

    Wolpert, D.H.

    1992-01-01

    A memory is a physical system for transferring information from one moment in time to another, where that information concerns something external to the system itself. This paper argues on information-theoretic and statistical mechanical grounds that useful memories must be of one of two types, exemplified by memory in abstract computer programs and by memory in photographs. Photograph-type memories work by exploiting a collapse of state space flow to an attractor state. (This attractor state is the "initialized" state of the memory.) The central assumption of the theory of reversible computation tells us that in any such collapsing, regardless of how the collapse is carried out, the entropy of the system must increase. In concert with the second law, this establishes the logical necessity of the empirical observation that photograph-type memories are temporally asymmetric (they can tell us about the past but not about the future). Under the assumption that human memory is a photograph-type memory, this result also explains why we humans can remember only our past and not our future. In contrast to photograph-type memories, computer-type memories do not require any initialization, and therefore are not directly affected by the second law. As a result, computer memories can be of the future as easily as of the past, even if the program running on the computer is logically irreversible. This is entirely in accord with the well-known temporal reversibility of the process of computation. This paper ends by arguing that the asymmetry of the psychological arrow of time is a direct consequence of the asymmetry of human memory. Together with the rest of this paper, this explains, explicitly and rigorously, why the psychological and thermodynamic arrows of time are correlated with one another. 24 refs

  7. Self-Testing Computer Memory

    Science.gov (United States)

    Chau, Savio, N.; Rennels, David A.

    1988-01-01

    Memory system for computer repeatedly tests itself during brief, regular interruptions of normal processing of data. Detects and corrects transient faults such as single-event upsets (changes in bits due to ionizing radiation) within milliseconds after they occur. Self-testing concept surpasses conventional approaches by actively flushing latent defects out of memory and attempting to correct them before they accumulate beyond the capacity for self-correction or detection. Cost of improvement is a modest increase in complexity of circuitry and operating time.

  8. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)

  9. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
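
    The sorting idea can be illustrated in a few lines (a schematic reconstruction, not the paper's code): sorting particles by grid-cell index makes the charge-accumulation scatter sweep memory nearly sequentially instead of at random.

```python
import numpy as np

rng = np.random.default_rng(2)
NG, NP = 64, 100_000                 # grid cells, particles
x = rng.uniform(0, NG, NP)           # 1-D particle positions

def accumulate_charge(x):
    """Deposit unit charge into the cell containing each particle."""
    rho = np.zeros(NG)
    cell = x.astype(int)
    np.add.at(rho, cell, 1.0)        # scatter: random memory access if x is unsorted
    return rho

order = np.argsort(x.astype(int), kind="stable")  # nominal sort by cell index
x_sorted = x[order]                  # physically adjacent particles now adjacent in the array
rho = accumulate_charge(x_sorted)    # same physics, far better memory locality
assert np.allclose(rho, accumulate_charge(x))
```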

  10. Computing betweenness centrality in external memory

    DEFF Research Database (Denmark)

    Arge, Lars; Goodrich, Michael T.; Walderveen, Freek van

    2013-01-01

    Betweenness centrality is one of the most well-known measures of the importance of nodes in a social-network graph. In this paper we describe the first known external-memory and cache-oblivious algorithms for computing betweenness centrality. We present four different external-memory algorithms...

  11. Human Memory Organization for Computer Programs.

    Science.gov (United States)

    Norcio, A. F.; Kerst, Stephen M.

    1983-01-01

    Results of study investigating human memory organization in processing of computer programming languages indicate that algorithmic logic segments form a cognitive organizational structure in memory for programs. Statement indentation and internal program documentation did not enhance the organizational process of recall of statements in five Fortran…

  12. Dynamic computing random access memory

    International Nuclear Information System (INIS)

    Traversa, F L; Bonani, F; Pershin, Y V; Di Ventra, M

    2014-01-01

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200–2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. (paper)

  13. Static Memory Deduplication for Performance Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gangyong Jia

    2017-04-01

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.

  14. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
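
    A minimal sketch of the offline page-detection step (our own simplification; SMD's actual mechanism operates inside the virtualization layer): hash each code-segment page and map duplicates onto a single shared copy.

```python
import hashlib

PAGE = 4096

def dedup_pages(code_segment: bytes):
    """Return (unique_pages, page_table) with duplicate pages mapped to one copy."""
    unique, table, index = [], [], {}
    for off in range(0, len(code_segment), PAGE):
        page = code_segment[off:off + PAGE]
        h = hashlib.sha256(page).digest()     # offline comparison via content hash
        if h not in index:                    # first occurrence: keep the page
            index[h] = len(unique)
            unique.append(page)
        table.append(index[h])                # all copies share one physical frame
    return unique, table

seg = b"A" * PAGE + b"B" * PAGE + b"A" * PAGE  # pages 0 and 2 are identical
unique, table = dedup_pages(seg)
print(len(unique), table)                      # 2 [0, 1, 0]: one page deduplicated
```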

  15. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.; Zidan, Mohammed A.; Kurdahi, Fadi; Eltawil, Ahmed M.

    2016-01-01

    Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.

  16. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.

    2016-12-08

    Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.
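
    The claimed compare/tag/write cycle can be modeled at the logic level as follows (a behavioural toy with assumed semantics, ignoring the memristor circuit itself):

```python
import numpy as np

def cam_search(cam, key, activate_mask):
    """tag[i] = 1 iff row i matches the key on every activated bit."""
    relevant = activate_mask.astype(bool)
    return np.all(cam[:, relevant] == key[relevant], axis=1).astype(int)

def cam_masked_write(cam, key, write_mask, tags):
    """Write the masked key bits into every tagged row."""
    cam[np.ix_(tags.astype(bool), write_mask.astype(bool))] = key[write_mask.astype(bool)]

cam = np.array([[1, 0, 1, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])
key = np.array([1, 0, 1, 0])
tags = cam_search(cam, key, np.array([1, 1, 0, 0]))      # compare first two bits only
cam_masked_write(cam, key, np.array([0, 0, 0, 1]), tags)  # write key bit 3 to tagged rows
print(tags)   # [1 1 0]: rows 0 and 1 matched
print(cam)    # their last column is now 0
```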

  17. Associative Memory Computing Power and Its Simulation

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

    The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130000 pre-calculated patterns and large numbers of chips can be easily assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed in commercial computers based on normal CPUs. The algorithm performance is limited, due to the lack of parallelism, and in addition the memory requirement is very large. In fact the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...

  18. Associative Memory computing power and its simulation

    CERN Document Server

    Ancu, L S; The ATLAS collaboration; Britzger, D; Giannetti, P; Howarth, J W; Luongo, C; Pandini, C; Schmitt, S; Volpi, G

    2014-01-01

    The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130000 pre-calculated patterns and large numbers of chips can be easily assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed in commercial computers based on normal CPUs. The algorithm performance is limited, due to the lack of parallelism, and in addition the memory requirement is very large. In fact the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...

  19. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Science.gov (United States)

    Kovac, André David; Koall, Maximilian; Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/prey population interactions. The evidence is mounting, not only to the presence of delays as physical constraints in signal propagation speed, but also to their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version to the single node Delay-Coupled Reservoir, that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.
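
    A crude discrete-time toy of a single-node delay-coupled reservoir (the parameters, update rule, and readout below are assumptions, not the authors' continuous-time model) illustrates the fading-memory recall that the proposed trained feedback is designed to extend:

```python
import numpy as np

rng = np.random.default_rng(3)
N_VIRT = 50             # virtual nodes along the delay line
ETA, GAMMA = 0.5, 0.5   # feedback and input scaling (assumed values)

def reservoir_states(u, w_in):
    """Toy update: each virtual node is a tanh of its own state one delay
    period ago plus masked input (a discretized sketch, not the paper's ODE)."""
    delay_line = np.zeros(N_VIRT)
    states = np.zeros((len(u), N_VIRT))
    for t, ut in enumerate(u):
        delay_line = np.tanh(ETA * delay_line + GAMMA * w_in * ut)
        states[t] = delay_line
    return states

u = rng.uniform(-1, 1, 500)                    # random input stream
X = reservoir_states(u, rng.uniform(-1, 1, N_VIRT))
target = np.roll(u, 1)                         # task: recall the previous input
w_out, *_ = np.linalg.lstsq(X[10:], target[10:], rcond=None)  # linear readout
print(np.corrcoef(X[10:] @ w_out, target[10:])[0, 1])
# Increasing the recall lag (np.roll(u, k) for larger k) degrades the fit,
# which is the fading-memory limitation the abstract describes.
```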

  20. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with a modeling algorithm that maps nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.

  1. Spin-transfer torque magnetoresistive random-access memory technologies for normally off computing (invited)

    International Nuclear Information System (INIS)

    Ando, K.; Yuasa, S.; Fujita, S.; Ito, J.; Yoda, H.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.

    2014-01-01

    Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally off computers. Critical tasks to achieve normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed by using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and challenges that remain for normally off computers are discussed

  2. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Directory of Open Access Journals (Sweden)

    André David Kovac

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/prey population interactions. The evidence is mounting, not only to the presence of delays as physical constraints in signal propagation speed, but also to their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version to the single node Delay-Coupled Reservoir, that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.

  3. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with a modeling algorithm that maps nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view. PMID:27876847

  4. Computational modelling of memory retention from synapse to behaviour

    Science.gov (United States)

    van Rossum, Mark C. W.; Shippi, Maria

    2013-03-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.

  5. Computational modelling of memory retention from synapse to behaviour

    International Nuclear Information System (INIS)

    Van Rossum, Mark C W; Shippi, Maria

    2013-01-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists. (paper)

  6. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  7. A 32-bit computer for large memory applications on the FASTBUS

    International Nuclear Information System (INIS)

    Kellner, R.; Blossom, J.M.; Hung, J.P.

    1985-01-01

    A FASTBUS based 32-bit computer is being built at Los Alamos National Laboratory for use in systems requiring large fast memory in the FASTBUS environment. A separate local execution bus allows data reduction to proceed concurrently with other FASTBUS operations. The computer, which can operate in either master or slave mode, includes the National Semiconductor NS32032 chip set with demand paged memory management, floating point slave processor, interrupt control unit, timers, and time-of-day clock. The 16.0 megabytes of random access memory are interleaved to allow windowed direct memory access on and off the FASTBUS at 80 megabytes per second

  8. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  9. Computer Icons and the Art of Memory.

    Science.gov (United States)

    McNair, John R.

    1996-01-01

    States that key aspects of "memoria," the ancient Art of Memory, especially its focus on vivid representational images set against distinct backgrounds, can be helpful in creating memorable, universal, and easily retrievable computer icons. (PA)

  10. The Spacetime Memory of Geometric Phases and Quantum Computing

    CERN Document Server

    Binder, B

    2002-01-01

    Spacetime memory is defined with a holonomic approach to information processing, where multi-state stability is introduced by a non-linear phase-locked loop. Geometric phases serve as the carrier of physical information and geometric memory (of orientation) given by a path integral measure of curvature that is periodically refreshed. Regarding the resulting spin-orbit coupling and gauge field, the geometric nature of spacetime memory suggests to assign intrinsic computational properties to the electromagnetic field.

  11. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  12. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  13. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  14. An Alternative Algorithm for Computing Watersheds on Shared Memory Parallel Computers

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.

    1995-01-01

    In this paper a parallel implementation of a watershed algorithm is proposed. The algorithm can easily be implemented on shared memory parallel computers. The watershed transform is generally considered to be inherently sequential since the discrete watershed of an image is defined using recursion.

  15. Distributed Memory Parallel Computing with SEAWAT

    Science.gov (United States)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (~10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources

  16. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

    This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime.  The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de

  17. Single-Chip Computers With Microelectromechanical Systems-Based Magnetic Memory

    NARCIS (Netherlands)

    Carley, L. Richard; Bain, James A.; Fedder, Gary K.; Greve, David W.; Guillou, David F.; Lu, Michael S.C.; Mukherjee, Tamal; Santhanam, Suresh; Abelmann, Leon; Min, Seungook

    This article describes an approach for implementing a complete computer system (CPU, RAM, I/O, and nonvolatile mass memory) on a single integrated-circuit substrate (a chip)—hence, the name "single-chip computer." The approach presented combines advances in the field of microelectromechanical

  18. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  19. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    Energy Technology Data Exchange (ETDEWEB)

    Vineyard, Craig Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  20. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

    The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. https://github.com/solonas13/maw (free software under the terms of the GNU GPL). alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
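
    For intuition, the definition can be checked with a naive in-memory enumeration (a reference check of the definition only; emMAW's contribution is doing this in external memory for inputs far too large for RAM): a word w is a minimal absent word if w does not occur in the text but both w[1:] and w[:-1] do.

```python
def minimal_absent_words(text, alphabet, max_len=4):
    """Naive enumeration of minimal absent words up to max_len."""
    # all factors (substrings) of the text up to max_len
    present = {text[i:i + k] for k in range(1, max_len + 1)
               for i in range(len(text) - k + 1)}
    maws = []
    for prefix in sorted(p for p in present if len(p) < max_len):
        for a in alphabet:
            w = prefix + a
            # w absent, but both its longest proper prefix and suffix occur
            if w not in present and w[1:] in present:
                maws.append(w)
    return sorted(maws)

print(minimal_absent_words("AACGTACGTT", "ACGT"))
```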

  1. Injecting Artificial Memory Errors Into a Running Computer Program

    Science.gov (United States)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
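
    The injection model can be sketched as follows (a standalone toy with an assumed API; the real BITFLIPS instruments a running process through Valgrind rather than operating on a buffer): flip each bit of a memory region independently with a given fault probability, logging each injected upset.

```python
import random

def inject_seus(buf: bytearray, p_fault: float, seed=42):
    """Flip each bit with probability p_fault; return a log of (byte, bit) hits."""
    rng = random.Random(seed)
    log = []
    for i in range(len(buf)):
        for bit in range(8):
            if rng.random() < p_fault:
                buf[i] ^= 1 << bit          # simulated single-event upset
                log.append((i, bit))        # log each injected SEU
    return log

data = bytearray(b"sensor telemetry frame")
hits = inject_seus(data, p_fault=0.01)
print(len(hits), data)                      # a few flipped bits garble the bytes
```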

  2. Computational and empirical simulations of selective memory impairments: Converging evidence for a single-system account of memory dissociations.

    Science.gov (United States)

    Curtis, Evan T; Jamieson, Randall K

    2018-04-01

    Current theory has divided memory into multiple systems, resulting in a fractionated account of human behaviour. By an alternative perspective, memory is a single system. However, debate over the details of different single-system theories has overshadowed the converging agreement among them, slowing the reunification of memory. Evidence in favour of dividing memory often takes the form of dissociations observed in amnesia, where amnesic patients are impaired on some memory tasks but not others. The dissociations are taken as evidence for separate explicit and implicit memory systems. We argue against this perspective. We simulate two key dissociations between classification and recognition in a computational model of memory, A Theory of Nonanalytic Association. We assume that amnesia reflects a quantitative difference in the quality of encoding. We also present empirical evidence that replicates the dissociations in healthy participants, simulating amnesic behaviour by reducing study time. In both analyses, we successfully reproduce the dissociations. We integrate our computational and empirical successes with the success of alternative models and manipulations and argue that our demonstrations, taken in concert with similar demonstrations with similar models, provide converging evidence for a more general set of single-system analyses that support the conclusion that a wide variety of memory phenomena can be explained by a unified and coherent set of principles.

  3. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The author first designs a computational structure which is well suited for a wide range of vision tasks and then develops parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis he demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  4. Reprogrammable logic in memristive crossbar for in-memory computing

    Science.gov (United States)

    Cheng, Long; Zhang, Mei-Yun; Li, Yi; Zhou, Ya-Xiong; Wang, Zhuo-Rui; Hu, Si-Yu; Long, Shi-Bing; Liu, Ming; Miao, Xiang-Shui

    2017-12-01

    Memristive stateful logic has emerged as a promising next-generation in-memory computing paradigm to address escalating computing-performance pressures in traditional von Neumann architecture. Here, we present a nonvolatile reprogrammable logic method that can process data between different rows and columns in a memristive crossbar array based on material implication (IMP) logic. Arbitrary Boolean logic can be executed with a reprogrammable cell containing four memristors in a crossbar array. In the fabricated Ti/HfO2/W memristive array, some fundamental functions, such as universal NAND logic and data transfer, were experimentally implemented. Moreover, using eight memristors in a 2 × 4 array, a one-bit full adder was theoretically designed and verified by simulation to exhibit the feasibility of our method to accomplish complex computing tasks. In addition, some critical logic-related performances were further discussed, such as the flexibility of data processing, cascading problem and bit error rate. Such a method could be a step forward in developing IMP-based memristive nonvolatile logic for large-scale in-memory computing architecture.
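
    The IMP primitive and the NAND construction mentioned above can be checked at the Boolean level (a logic-level toy with no device physics; the step sequence follows the standard IMP-logic construction of clearing one state bit and applying two implications):

```python
def imp(p, q):
    """Material implication: IMP(p, q) = (NOT p) OR q."""
    return (not p) or q

def nand_via_imp(p, q):
    s = False         # step 0: clear the output memristor to 0
    s = imp(p, s)     # step 1: s = NOT p
    s = imp(q, s)     # step 2: s = (NOT q) OR (NOT p) = NAND(p, q)
    return s

# exhaustive truth-table check against the NAND definition
for p in (False, True):
    for q in (False, True):
        assert nand_via_imp(p, q) == (not (p and q))
print("NAND realized with one clear and two IMP operations")
```

    Since NAND is functionally complete, this two-step primitive is enough, in principle, to build arbitrary Boolean logic of the kind the record describes.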

  5. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes the porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented

  6. Evaluation of External Memory Access Performance on a High-End FPGA Hybrid Computer

    Directory of Open Access Journals (Sweden)

    Konstantinos Kalaitzis

    2016-10-01

    The motivation of this research was to evaluate the main memory performance of a hybrid supercomputer such as the Convey HC-x, and ascertain how the controller performs in several access scenarios, vis-à-vis hand-coded memory prefetches. Such memory patterns are very useful in stencil computations. The theoretical bandwidth of the memory of the Convey is compared with the results of our measurements. The accurate study of the memory subsystem is particularly useful for users when they are developing their application-specific personality. Experiments were performed to measure the bandwidth between the coprocessor and the memory subsystem. The experiments aimed mainly at measuring the read access speed of the memory from the Application Engines (FPGAs). Different ways of accessing data were used in order to find the most efficient way to access memory. This way was proposed for future work in the Convey HC-x. When performing a series of accesses to memory, non-uniform latencies occur. The Memory Controller of the Convey HC-x in the coprocessor attempts to cover this latency. We measure memory efficiency as the ratio of the number of memory accesses to the number of execution cycles. The result of this measurement converges to one in most cases. In addition, we performed experiments with hand-coded memory accesses. The analysis of the experimental results shows how the memory subsystem and Memory Controllers work. From this work we conclude that the memory controllers do an excellent job, largely because (transparently to the user) they seem to cache large amounts of data, and hence hand-coding is not needed in most situations.

  7. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej; Kuźnik, Krzysztof M.; Paszyński, Maciej R.; Calo, Victor M.; Pardo, D.

    2014-01-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one-dimensional problems, O(N p^2) for two-dimensional problems, and O(N^(4/3) p^2) for three-dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one-dimensional case, O(N^1.5 p^3) for the two-dimensional case, and O(N^2 p^3) for the three-dimensional case. The shared memory version thus significantly reduces the computational cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.

  8. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej

    2014-06-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one-dimensional problems, O(N p^2) for two-dimensional problems, and O(N^(4/3) p^2) for three-dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one-dimensional case, O(N^1.5 p^3) for the two-dimensional case, and O(N^2 p^3) for the three-dimensional case. The shared memory version thus significantly reduces the computational cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
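
    Dropping constants, the quoted estimates imply a sequential-to-parallel cost ratio that grows with N; a quick numeric check for the two-dimensional case (growth rates only, not timings):

```python
def shared_cost_2d(N, p):      # ideal shared-memory parallel estimate, 2-D: O(N p^2)
    return N * p**2

def sequential_cost_2d(N, p):  # sequential direct-solver estimate, 2-D: O(N^1.5 p^3)
    return N**1.5 * p**3

p = 3                          # cubic B-splines
for N in (10**4, 10**6):
    ratio = sequential_cost_2d(N, p) / shared_cost_2d(N, p)
    print(N, ratio)            # the ratio grows like sqrt(N) * p
```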

  9. Computer Use and Its Effect on the Memory Process in Young and Adults

    Science.gov (United States)

    Alliprandini, Paula Mariza Zedu; Straub, Sandra Luzia Wrobel; Brugnera, Elisangela; de Oliveira, Tânia Pitombo; Souza, Isabela Augusta Andrade

    2013-01-01

    This work investigates the effect of computer use on the memory process in young people and adults under the Perceptual and Memory experimental conditions. The memory condition involved an information acquisition phase and a recovery phase, at time intervals (2 min, 24 hours, and 1 week) in pre- and post-test situations (before and after the participants…

  10. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  11. Graphical Visualization on Computational Simulation Using Shared Memory

    International Nuclear Information System (INIS)

    Lima, A B; Correa, Eberth

    2014-01-01

    The Shared Memory technique is a powerful tool for parallelizing computer codes. In particular, it can be used to visualize the results "on the fly", without stopping a running simulation. In this presentation we discuss and show how to use the technique in conjunction with a visualization code using OpenGL.
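
    A minimal sketch of the idea, using Python's multiprocessing shared memory in place of the authors' implementation (the OpenGL rendering is omitted, and the array shape and update rule are invented for illustration):

```python
import time
import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def simulate(name, steps=100):
    shm = SharedMemory(name=name)          # attach to the existing block
    field = np.ndarray((64, 64), dtype=np.float64, buffer=shm.buf)
    for _ in range(steps):
        field += 0.01                      # stand-in for one simulation step
        time.sleep(0.01)
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=64 * 64 * 8)
    field = np.ndarray((64, 64), dtype=np.float64, buffer=shm.buf)
    field[:] = 0.0
    p = Process(target=simulate, args=(shm.name,))
    p.start()
    for _ in range(5):                     # the "viewer": sample without pausing the run
        time.sleep(0.05)
        print("mean field value:", field.mean())
    p.join()
    shm.close()
    shm.unlink()
```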

  12. Projection multiplex recording of computer-synthesised one-dimensional Fourier holograms for holographic memory systems: mathematical and experimental modelling

    Energy Technology Data Exchange (ETDEWEB)

    Betin, A Yu; Bobrinev, V I; Verenikina, N M; Donchenko, S S; Odinokov, S B [Research Institute ' Radiotronics and Laser Engineering' , Bauman Moscow State Technical University, Moscow (Russian Federation); Evtikhiev, N N; Zlokazov, E Yu; Starikov, S N; Starikov, R S [National Reseach Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow (Russian Federation)

    2015-08-31

    A multiplex method of recording computer-synthesised one-dimensional Fourier holograms intended for holographic memory devices is proposed. The method potentially allows increasing the recording density in the previously proposed holographic memory system based on the computer synthesis and projection recording of data page holograms. (holographic memory)

  13. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  15. Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms.

    Science.gov (United States)

    James, Ella L; Bonsall, Michael B; Hoppitt, Laura; Tunbridge, Elizabeth M; Geddes, John R; Milton, Amy L; Holmes, Emily A

    2015-08-01

    Memory of a traumatic event becomes consolidated within hours. Intrusive memories can then flash back repeatedly into the mind's eye and cause distress. We investigated whether reconsolidation-the process during which memories become malleable when recalled-can be blocked using a cognitive task and whether such an approach can reduce these unbidden intrusions. We predicted that reconsolidation of a reactivated visual memory of experimental trauma could be disrupted by engaging in a visuospatial task that would compete for visual working memory resources. We showed that intrusive memories were virtually abolished by playing the computer game Tetris following a memory-reactivation task 24 hr after initial exposure to experimental trauma. Furthermore, both memory reactivation and playing Tetris were required to reduce subsequent intrusions (Experiment 2), consistent with reconsolidation-update mechanisms. A simple, noninvasive cognitive-task procedure administered after emotional memory has already consolidated (i.e., > 24 hours after exposure to experimental trauma) may prevent the recurrence of intrusive memories of those emotional events. © The Author(s) 2015.

  16. Memory allocation and computations for Laplace’s equation of 3-D arbitrary boundary problems

    Directory of Open Access Journals (Sweden)

    Tsay Tswn-Syau

    2017-01-01

    Full Text Available Computation iteration schemes and a memory allocation technique for the finite difference method are presented in this paper. The transformed form of a groundwater flow problem in generalized curvilinear coordinates is taken as the illustrating example, and a 3-dimensional, second-order-accurate 19-point scheme is presented. Traditional element-by-element methods (e.g., SOR) are preferred since they are simple and memory efficient, but they are time consuming in computation. For efficient memory allocation, an index method is presented to store the sparse non-symmetric matrix of the problem. For computations, conjugate-gradient-like methods are reported to be computationally efficient; among them, using incomplete Cholesky decomposition as a preconditioner is reported to be a good method for iteration convergence. In general, the index method developed in this paper has the following advantages: (1) it adapts to various governing and boundary conditions, (2) it is flexible for higher-order approximation, (3) it is independent of problem dimension, (4) it is efficient for complex problems whose global matrix is not symmetric, (5) it is convenient for general sparse matrices, (6) it is computationally efficient in the most time-consuming procedure, matrix multiplication, and (7) it is applicable to any developed matrix solver.
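
    For flavor, here is a toy version of an index-based sparse storage scheme (a simplification in the spirit of the abstract, not the paper's exact data structure): each nonzero is stored with its row and column index, and the triplets drive the matrix-vector product that dominates conjugate-gradient-like iterations.

```python
import numpy as np

# COO-style index storage: only the nonzeros of a 3x3 matrix are kept.
rows = np.array([0, 0, 1, 2, 2])     # row index of each nonzero
cols = np.array([0, 2, 1, 0, 2])     # column index of each nonzero
vals = np.array([4.0, -1.0, 3.0, -1.0, 5.0])

def spmv(x):
    """y = A @ x using only the stored nonzeros."""
    y = np.zeros(3)
    np.add.at(y, rows, vals * x[cols])
    return y

print(spmv(np.array([1.0, 2.0, 3.0])))   # -> [ 1.  6. 14.]
```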

  17. Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    1985-01-01

    Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…

  18. Towards Modeling False Memory With Computational Knowledge Bases.

    Science.gov (United States)

    Li, Justin; Kohanyi, Emma

    2017-01-01

    One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.
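
    A toy illustration of this modeling approach (the miniature association network below is hand-written, not drawn from WordNet or DBpedia): spreading activation from studied words converges on an unstudied critical lure, the signature of the DRM false memory effect.

```python
# Hand-made semantic network: each word points to its associates.
associations = {
    "bed": ["sleep", "rest"], "dream": ["sleep"], "pillow": ["sleep", "bed"],
    "rest": ["sleep"], "sleep": ["bed", "dream", "rest"],
}
activation = {w: 0.0 for w in associations}
for studied in ("bed", "dream", "pillow", "rest"):   # the study list
    activation[studied] += 1.0
for _ in range(3):                                    # decayed propagation steps
    spread = {w: 0.0 for w in activation}
    for w, a in activation.items():
        for nb in associations[w]:
            spread[nb] += 0.5 * a / len(associations[w])
    activation = {w: activation[w] + spread[w] for w in activation}
print(max(activation, key=activation.get))  # "sleep", despite never being studied
```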

  19. Optical computing, optical memory, and SBIRs at Foster-Miller

    Science.gov (United States)

    Domash, Lawrence H.

    1994-03-01

    A desktop design and manufacturing system for binary diffractive elements, MacBEEP, was developed with the optical researcher in mind. Optical processing systems for specialized tasks such as cellular automata computation and fractal measurement were constructed. A new family of switchable holograms has enabled several applications for the control of laser beams in optical memories. New spatial light modulators and optical logic elements have been demonstrated based on a more manufacturable semiconductor technology. Novel synthetic and polymeric nonlinear materials for optical storage are under development in an integrated memory architecture. SBIR programs enable creative contributions from smaller companies, both product oriented and technology oriented, and support advances that might not otherwise be developed.

  20. Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF): A data assimilation scheme for memory intensive, high performance computing

    Science.gov (United States)

    Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick

    2015-12-01

    Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant of the Ensemble Kalman Filter (EnKF). RumEnKF differs from the EnKF in that it does not store the entire ensemble, but rather saves only the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated depends less on available memory and mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current-generation supercomputer architecture, where the number of available floating point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces the memory used compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low dimensional Lorenz and high dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error from the three algorithms.
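
    The central trick can be sketched in a few lines (a schematic of the idea as described in the abstract, not the RumEnKF code itself): update the mean and covariance one ensemble member at a time, so no member ever needs to stay in memory.

```python
import numpy as np

# Streaming (Welford-style) accumulation of the first two moments: memory is
# O(n_state**2) for the covariance, instead of O(n_members * n_state).
def running_moments(member_stream, n_state):
    mean = np.zeros(n_state)
    M2 = np.zeros((n_state, n_state))    # sum of outer products of deviations
    k = 0
    for x in member_stream:              # each member can be discarded after use
        k += 1
        delta = x - mean
        mean += delta / k
        M2 += np.outer(delta, x - mean)  # uses the updated mean
    return mean, M2 / (k - 1)            # sample mean and covariance

members = (np.random.randn(4) for _ in range(1000))  # generated one at a time
mean, cov = running_moments(members, 4)
```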

  1. A homotopy method for solving Riccati equations on a shared memory parallel computer

    International Nuclear Information System (INIS)

    Zigic, D.; Watson, L.T.; Collins, E.G. Jr.; Davis, L.D.

    1993-01-01

    Although there are numerous algorithms for solving Riccati equations, there still remains a need for algorithms which can operate efficiently on large problems and on parallel machines. This paper gives a new homotopy-based algorithm for solving Riccati equations on a shared memory parallel computer. The central part of the algorithm is the computation of the kernel of the Jacobian matrix, which is essential for the corrector iterations along the homotopy zero curve. Using a Schur decomposition, the tensor product structure of various matrices can be efficiently exploited. The algorithm allows for efficient parallelization on shared memory machines.

  2. Computational dissection of human episodic memory reveals mental process-specific genetic profiles.

    Science.gov (United States)

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J-F

    2015-09-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory.

  4. Perspective: Memcomputing: Leveraging memory and physics to compute efficiently

    Science.gov (United States)

    Di Ventra, Massimiliano; Traversa, Fabio L.

    2018-05-01

    It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs

  5. Exploiting short-term memory in soft body dynamics as a computational resource.

    Science.gov (United States)

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
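
    The principle can be sketched with a generic reservoir-computing stand-in (a random recurrent network plays the role of the soft silicone arm; all parameters are illustrative): only a linear readout is trained, and the fading "body" dynamics supply the short-term memory.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 1, (100, 100))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # fading dynamics (spectral radius < 1)
w_in = rng.normal(0, 1, 100)

u = rng.uniform(-1, 1, 2000)                  # input stream driving the "body"
x = np.zeros(100); states = []
for t in range(2000):
    x = np.tanh(W @ x + w_in * u[t])          # stand-in for the arm's dynamics
    states.append(x.copy())
X = np.array(states)

delay = 3                                      # task: recall u(t - delay)
target = np.roll(u, delay)
w_out, *_ = np.linalg.lstsq(X[delay:], target[delay:], rcond=None)  # linear readout
pred = X[delay:] @ w_out
print("memory recall correlation:", np.corrcoef(pred, target[delay:])[0, 1])
```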

  6. From shoebox to performative agent: the computer as personal memory machine

    NARCIS (Netherlands)

    van Dijck, J.

    2005-01-01

    Digital technologies offer new opportunities in the everyday lives of people: with still expanding memory capacities, the computer is rapidly becoming a giant storage and processing facility for recording and retrieving ‘bits of life’. Software engineers and companies promise not only to expand the

  8. Irrelevant sensory stimuli interfere with working memory storage: evidence from a computational model of prefrontal neurons.

    Science.gov (United States)

    Bancroft, Tyler D; Hockley, William E; Servos, Philip

    2013-03-01

    The encoding of irrelevant stimuli into the memory store has previously been suggested as a mechanism of interference in working memory (e.g., Lange & Oberauer, Memory, 13, 333-339, 2005; Nairne, Memory & Cognition, 18, 251-269, 1990). Recently, Bancroft and Servos (Experimental Brain Research, 208, 529-532, 2011) used a tactile working memory task to provide experimental evidence that irrelevant stimuli were, in fact, encoded into working memory. In the present study, we replicated Bancroft and Servos's experimental findings using a biologically based computational model of prefrontal neurons, providing a neurocomputational model of overwriting in working memory. Furthermore, our modeling results show that inhibition acts to protect the contents of working memory, and they suggest a need for further experimental research into the capacity of vibrotactile working memory.

  9. Retrieval and organizational strategies in conceptual memory a computer model

    CERN Document Server

    Kolodner, Janet L

    2014-01-01

    'Someday we expect that computers will be able to keep us informed about the news. People have imagined being able to ask their home computers questions such as "What's going on in the world?"…'. Originally published in 1984, this book is a fascinating look at the world of memory and computers before the internet became the mainstream phenomenon it is today. It looks at the early development of a computer system that could keep us informed in a way that we now take for granted. Presenting a theory of remembering, based on human information processing, it begins to address many of the hard problems implicated in the quest to make computers remember. The book had two purposes in presenting this theory of remembering. First, to be used in implementing intelligent computer systems, including fact retrieval systems and intelligent systems in general. Any intelligent program needs to use and store and use a great deal of knowledge. The strategies and structures in the book were designed to be used for that purpos...

  10. The Sensitivity of Memory Consolidation and Reconsolidation to Inhibitors of Protein Synthesis and Kinases: Computational Analysis

    Science.gov (United States)

    Zhang, Yili; Smolen, Paul; Baxter, Douglas A.; Byrne, John H.

    2010-01-01

    Memory consolidation and reconsolidation require kinase activation and protein synthesis. Blocking either process during or shortly after training or recall disrupts memory stabilization, which suggests the existence of a critical time window during which these processes are necessary. Using a computational model of kinase synthesis and…

  11. Exploring memory hierarchy design with emerging memory technologies

    CERN Document Server

    Sun, Guangyu

    2014-01-01

    This book equips readers with tools for computer architecture of high performance, low power, and high reliability memory hierarchy in computer systems based on emerging memory technologies, such as STTRAM, PCM, FBDRAM, etc. The techniques described offer advantages of high density, near-zero static power, and immunity to soft errors, which have the potential of overcoming the "memory wall." The authors discuss memory design from various perspectives: emerging memory technologies are employed in the memory hierarchy with novel architecture modification; hybrid memory structure is introduced to leverage advantages from multiple memory technologies; an analytical model named "Moguls" is introduced to explore quantitatively the optimization design of a memory hierarchy; finally, the vulnerability of the CMPs to radiation-based soft errors is improved by replacing different levels of on-chip memory with STT-RAMs. · Provides a holistic study of using emerging memory technologies i...

  12. Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.

    Science.gov (United States)

    Dworak, Markus; Schierl, Thomas; Bruns, Thomas; Strüder, Heiko Klaus

    2007-11-01

    Television and computer game consumption are a powerful influence in the lives of most children. Previous evidence has supported the notion that media exposure could impair a variety of behavioral characteristics. Excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the relationship between singular excessive media consumption on sleep patterns and linked implications on children. The aim of this study was to investigate the effects of singular excessive television and computer game consumption on sleep patterns and memory performance of children. Eleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance. Only computer game playing resulted in significant reduced amounts of slow-wave sleep as well as significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after previous computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns. The results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of the negative influence of media consumption on children's sleep, learning, and memory.

  13. Method of computer generation and projection recording of microholograms for holographic memory systems: mathematical modelling and experimental implementation

    International Nuclear Information System (INIS)

    Betin, A Yu; Bobrinev, V I; Evtikhiev, N N; Zherdev, A Yu; Zlokazov, E Yu; Lushnikov, D S; Markin, V V; Odinokov, S B; Starikov, S N; Starikov, R S

    2013-01-01

    A method of computer generation and projection recording of microholograms for holographic memory systems is presented; the results of mathematical modelling and experimental implementation of the method are demonstrated. (holographic memory)

  14. Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…

  15. Memory and selective attention in multiple sclerosis: cross-sectional computer-based assessment in a large outpatient sample.

    Science.gov (United States)

    Adler, Georg; Lembach, Yvonne

    2015-08-01

    Cognitive impairments may have a severe impact on everyday functioning and the quality of life of patients with multiple sclerosis (MS). However, there are some methodological problems in their assessment, and only a few studies allow a representative estimate of the prevalence and severity of cognitive impairments in MS patients. We applied a computer-based method, the memory and attention test (MAT), in 531 outpatients with MS, who were assessed at nine neurological practices or specialized outpatient clinics. The findings were compared with those obtained in an age-, sex- and education-matched control group of 84 healthy subjects. Episodic short-term memory was substantially decreased in the MS patients. About 20% of them reached a score of less than two standard deviations below the mean of the control group. The episodic short-term memory score was negatively correlated with the EDSS score. Minor but also significant impairments in the MS patients were found for verbal short-term memory, episodic working memory and selective attention. The computer-based MAT was found to be useful for the routine assessment of cognition in MS outpatients.

  16. Robust dynamical decoupling for quantum computing and quantum memory.

    Science.gov (United States)

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.

  17. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    Science.gov (United States)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input work load consisting of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying the execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.

  18. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores-neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
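
    For reference, the canonical computation in question (in the standard Carandini-Heeger form; the symbols are the conventional ones, not taken from the paper) divides each unit's drive by the pooled activity of its neighbors:

```latex
% Divisive normalization: response R_i to input drive c_i, with exponent n,
% semi-saturation constant \sigma, and a normalization pool over units j.
R_i = \frac{c_i^{\,n}}{\sigma^{n} + \sum_{j} c_j^{\,n}}
```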

  19. A computational model of fMRI activity in the intraparietal sulcus that supports visual working memory.

    Science.gov (United States)

    Domijan, Dražen

    2011-12-01

    A computational model was developed to explain a pattern of results of fMRI activation in the intraparietal sulcus (IPS) supporting visual working memory for multiobject scenes. The model is based on the hypothesis that dendrites of excitatory neurons are major computational elements in the cortical circuit. Dendrites enable formation of a competitive queue that exhibits a gradient of activity values for nodes encoding different objects, and this pattern is stored in working memory. In the model, brain imaging data are interpreted as a consequence of blood flow arising from dendritic processing. Computer simulations showed that the model successfully simulates data showing the involvement of inferior IPS in object individuation and spatial grouping through representation of objects' locations in space, along with the involvement of superior IPS in object identification through representation of a set of objects' features. The model exhibits a capacity limit due to the limited dynamic range for nodes and the operation of lateral inhibition among them. The capacity limit is fixed in the inferior IPS regardless of the objects' complexity, due to the normalization of lateral inhibition, and variable in the superior IPS, due to the different encoding demands for simple and complex shapes. Systematic variation in the strength of self-excitation enables an understanding of the individual differences in working memory capacity. The model offers several testable predictions regarding the neural basis of visual working memory.
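
    A toy competitive-queue dynamic in this spirit (minimal equations of my own, not the paper's dendritic model): self-excitation sustains item activity within a limited dynamic range, while lateral inhibition makes items compete, so weak entries are suppressed as load grows.

```python
import numpy as np

# Self-excitation a keeps items active within the clipped dynamic range;
# global lateral inhibition b suppresses the weakest competitors. Items that
# saturate at 1.0 remain there only if a - b*(k - 1) >= 1, so the number of
# survivors is bounded by 1 + (a - 1)/b regardless of how many were presented.
def settle(inputs, a=1.2, b=0.05, steps=500):
    x = np.asarray(inputs, dtype=float)
    for _ in range(steps):
        x = np.clip(a * x - b * (x.sum() - x), 0.0, 1.0)
    return x

print(settle([0.9, 0.8, 0.7, 0.6]))
print(settle([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]))
```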

  20. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej

    2015-02-01

    This paper derives theoretical estimates of the computational cost for the isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for the C^(p-1) global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order O(log(N)p^2) for the one dimensional (1D) case, O(Np^2) for the two dimensional (2D) case, and O(N^(4/3)p^2) for the three dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX and SuperLU, available through the PETIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates both in terms of p and N. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, becoming about 20% for 256 processors for a 3D example with 128^3 unknowns and linear B-splines with C^0 global continuity, and 15% for a 3D example with 64^3 unknowns and quartic B-splines with C^3 global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher order continuity spaces is large, quickly consuming all the available memory resources even in the parallel distributed memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving higher order continuity spaces, although the number of processors that one can efficiently employ is somewhat limited.

  1. Virtual memory support for distributed computing environments using a shared data object model

    Science.gov (United States)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together constitute a novel approach to the support of flexible coherence under application control.

  2. Translation Memory and Computer Assisted Translation Tool for Medieval Texts

    Directory of Open Access Journals (Sweden)

    Törcsvári Attila

    2013-05-01

    Full Text Available Translation memories (TMs), as part of Computer Assisted Translation (CAT) tools, support translators in reusing portions of formerly translated text. Fencing books are good candidates for using TMs due to the high number of repeated terms. Medieval texts suffer from a number of drawbacks that make even "simple" rewording to the modern version of the same language hard. The difficulties analyzed are: lack of systematic spelling, unusual word orders, and typos in the original. A hypothesis is made and verified that even simple modernization increases legibility and is feasible, and that it is worthwhile to apply translation memories due to the numerous and even extremely long repeated terms. Therefore, methods and algorithms are presented (1) for the automated transcription of medieval texts (when a limited training set is available), and (2) for the collection of repeated patterns. The efficiency of the algorithms is analyzed for recall and precision.

  3. Studies of electron collisions with polyatomic molecules using distributed-memory parallel computers

    International Nuclear Information System (INIS)

    Winstead, C.; Hipes, P.G.; Lima, M.A.P.; McKoy, V.

    1991-01-01

    Elastic electron scattering cross sections from 5 to 30 eV are reported for the molecules C2H4, C2H6, C3H8, Si2H6, and GeH4, obtained using an implementation of the Schwinger multichannel method for distributed-memory parallel computer architectures. These results, obtained within the static-exchange approximation, are in generally good agreement with the available experimental data. These calculations demonstrate the potential of highly parallel computation in the study of collisions between low-energy electrons and polyatomic gases. The computational methodology discussed is also directly applicable to the calculation of elastic cross sections at higher levels of approximation (target polarization) and of electronic excitation cross sections

  4. Metal oxide resistive random access memory based synaptic devices for brain-inspired computing

    Science.gov (United States)

    Gao, Bin; Kang, Jinfeng; Zhou, Zheng; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan

    2016-04-01

    The traditional Boolean computing paradigm based on the von Neumann architecture is facing great challenges for future information technology applications such as big data, the Internet of Things (IoT), and wearable devices, due to the limited processing capability issues such as binary data storage and computing, non-parallel data processing, and the buses requirement between memory units and logic units. The brain-inspired neuromorphic computing paradigm is believed to be one of the promising solutions for realizing more complex functions with a lower cost. To perform such brain-inspired computing with a low cost and low power consumption, novel devices for use as electronic synapses are needed. Metal oxide resistive random access memory (ReRAM) devices have emerged as the leading candidate for electronic synapses. This paper comprehensively addresses the recent work on the design and optimization of metal oxide ReRAM-based synaptic devices. A performance enhancement methodology and optimized operation scheme to achieve analog resistive switching and low-energy training behavior are provided. A three-dimensional vertical synapse network architecture is proposed for high-density integration and low-cost fabrication. The impacts of the ReRAM synaptic device features on the performances of neuromorphic systems are also discussed on the basis of a constructed neuromorphic visual system with a pattern recognition function. Possible solutions to achieve the high recognition accuracy and efficiency of neuromorphic systems are presented.

  5. Logic computation in phase change materials by threshold and memory switching.

    Science.gov (United States)

    Cassinerio, M; Ciocchini, N; Ielmini, D

    2013-11-06

    Memristors, namely hysteretic devices capable of changing their resistance in response to applied electrical stimuli, may provide new opportunities for future memory and computation, thanks to their scalable size, low switching energy and nonvolatile nature. We have developed a functionally complete set of logic functions including NOR, NAND and NOT gates, each utilizing a single phase-change memristor (PCM) where resistance switching is due to the phase transformation of an active chalcogenide material. The logic operations are enabled by the high functionality of nanoscale phase change, featuring voltage comparison, additive crystallization and pulse-induced amorphization. The nonvolatile nature of memristive states provides the basis for developing reconfigurable hybrid logic/memory circuits featuring low-power and high-speed switching. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Programs for Testing Processor-in-Memory Computing Systems

    Science.gov (United States)

    Katz, Daniel S.

    2006-01-01

    The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.

  7. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model.

    Science.gov (United States)

    Aberg, Kristoffer C; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improves memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward-delivery and reward-anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of
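
    For orientation, here is a minimal Rescorla-Wagner-style sketch of the two quantities the model tracks trial by trial (the learning rate and reward sequence are arbitrary): the expected value (EV) indexes reward anticipation, and the prediction error (PE) indexes surprise at reward delivery.

```python
# EV before each trial and PE at reward delivery, under a simple delta rule.
def run_trials(rewards, alpha=0.2):
    V = 0.0                       # expected value of the cue
    history = []
    for r in rewards:
        pe = r - V                # reward prediction error at delivery
        history.append((V, pe))   # EV indexes anticipation, |PE| indexes surprise
        V += alpha * pe           # delta-rule update
    return history

for ev, pe in run_trials([1, 1, 0, 1, 0, 0, 1]):
    print(f"EV={ev:.2f}  PE={pe:+.2f}")
```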

  8. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan eAgarwal

    2016-01-01

    Full Text Available The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and, more generally, unsupervised learning.
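
    A minimal numeric sketch of the two crossbar kernels described above (array size and learning rate are arbitrary): a parallel read is a vector-matrix multiply through the conductance matrix G, and a parallel write is a rank-1 outer-product update.

```python
import numpy as np

N = 4
G = np.random.rand(N, N)          # device conductances of an NxN crossbar

v_in = np.random.rand(N)          # voltages applied to the rows
i_out = G.T @ v_in                # column currents: the parallel read

u, v = np.random.rand(N), np.random.rand(N)
G += 0.01 * np.outer(u, v)        # the parallel write: rank-1 update
```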

  9. Method and apparatus for managing access to a memory

    Science.gov (United States)

    DeBenedictis, Erik

    2017-08-01

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.

  10. Providing for organizational memory in computer supported meetings

    OpenAIRE

    Schwabe, Gerhard

    1994-01-01

    Meeting memory features are poorly integrated into current group support systems (GSS). In this article, I discuss how to introduce meeting memory functionality into a GSS. The article first introduces the benefits of effective meetings and organizational memory to an organization. Then, the following challenges to design are discussed: How to store semantically rich output, how to build up the meeting memory with a minimum of additional effort, how to integrate meeting memory into organizati...

  11. PREFACE: Special section on Computational Fluid Dynamics—in memory of Professor Kunio Kuwahara Special section on Computational Fluid Dynamics—in memory of Professor Kunio Kuwahara

    Science.gov (United States)

    Ishii, Katsuya

    2011-08-01

    This issue includes a special section on computational fluid dynamics (CFD) in memory of the late Professor Kunio Kuwahara, who passed away on 15 September 2008, at the age of 66. In this special section, five articles are included that are based on the lectures and discussions at `The 7th International Nobeyama Workshop on CFD: To the Memory of Professor Kuwahara' held in Tokyo on 23 and 24 September 2009. Professor Kuwahara started his research in fluid dynamics under Professor Imai at the University of Tokyo. His first paper was published in 1969 with the title 'Steady Viscous Flow within Circular Boundary', with Professor Imai. In this paper, he combined theoretical and numerical methods in fluid dynamics. Since that time, he made significant and seminal contributions to computational fluid dynamics. He undertook pioneering numerical studies on the vortex method in the 1970s. From then to the early nineties, he developed numerical analyses on a variety of three-dimensional unsteady phenomena of incompressible and compressible fluid flows and/or complex fluid flows using his own supercomputers with academic and industrial co-workers and members of his private research institute, ICFD in Tokyo. In addition, a number of senior and young researchers of fluid mechanics around the world were invited to ICFD and the Nobeyama workshops, which were held near his villa, and they intensively discussed new frontier problems of fluid physics and fluid engineering, enjoying Professor Kuwahara's kind hospitality. At the memorial Nobeyama workshop held in 2009, 24 overseas speakers presented their papers, including the talks of Dr J P Boris (Naval Research Laboratory), Dr E S Oran (Naval Research Laboratory), Professor Z J Wang (Iowa State University), Dr M Meinke (RWTH Aachen), Professor K Ghia (University of Cincinnati), Professor U Ghia (University of Cincinnati), Professor F Hussain (University of Houston), Professor M Farge (École Normale Superieure), Professor J Y Yong (National

  12. How Human Memory and Working Memory Work in Second Language Acquisition

    OpenAIRE

    小那覇, 洋子; Onaha, Hiroko

    2014-01-01

    We often draw an analogy between human memory and computers. Information around us is first taken into our memory storage, and then we use the stored information whenever we need it in our daily life. Linguistic information is also in storage, and we process our thoughts based on the memory that is stored. Memory storage consists of multiple memory systems, one of which is called working memory, which includes short-term memory. Working memory is the central system that underpins the process...

  13. RAM-efficient external memory sorting

    DEFF Research Database (Denmark)

    Arge, Lars; Thorup, Mikkel

    2013-01-01

    In recent years a large number of problems have been considered in external memory models of computation, where the complexity measure is the number of blocks of data that are moved between slow external memory and fast internal memory (also called I/Os). In practice, however, internal memory time often dominates the total running time once I/O-efficiency has been obtained. In this paper we study algorithms for fundamental problems that are simultaneously I/O-efficient and internal memory efficient in the RAM model of computation.
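
    A toy of the external-memory side of the problem (illustrative only; M stands in for the internal memory capacity): form sorted runs that fit in internal memory, then stream-merge them, touching each run one item at a time.

```python
import heapq

# Toy external merge sort: run formation followed by a multiway merge.
# heapq.merge streams the runs item by item, keeping the merge RAM-efficient.
def external_sort(data, M=4):
    runs = [sorted(data[i:i + M]) for i in range(0, len(data), M)]  # sorted runs
    return list(heapq.merge(*runs))                                 # multiway merge

print(external_sort([9, 1, 7, 3, 8, 2, 6, 5, 4, 0]))
```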

  14. Photon echo quantum random access memory integration in a quantum computer

    International Nuclear Information System (INIS)

    Moiseev, Sergey A; Andrianov, Sergey N

    2012-01-01

    We have analysed an efficient integration of multi-qubit echo quantum memory (QM) into a quantum computer scheme based on SQUIDs, quantum dots or atomic resonant ensembles in a quantum electrodynamics cavity. Here, one atomic ensemble with controllable inhomogeneous broadening is used for the QM node, and other nodes characterized by a homogeneously broadened resonant line are used for processing. We have found the optimal conditions for the efficient integration of the multi-qubit QM modified for the analysed scheme, and we have determined the self-temporal modes providing a perfect reversible transfer of the photon qubits between the QM node and arbitrary processing nodes. The obtained results open the way for the realization of full-scale solid-state quantum computing based on efficient multi-qubit QM. (paper)

  15. Determination of memory performance

    International Nuclear Information System (INIS)

    Gopych, P.M.

    1999-01-01

    Within the framework of statistical hypothesis testing theory, we propose a model definition and a computer method for the model calculation of human memory performance measures widely used in neuropsychology (free recall, cued recall, and recognition probabilities), as well as a model definition and a computer method for the model calculation of the intensities of cues used in experiments for testing human memory quality. Models for active and passive memory traces and their relations are found. It is shown that an autoassociative memory unit in the form of a short two-layer artificial neural network, with or without damage, can be used for the model description of memory performance in subjects with or without local brain lesions.

  16. Coupling Computer Codes for The Analysis of Severe Accident Using A Pseudo Shared Memory Based on MPI

    International Nuclear Information System (INIS)

    Cho, Young Chul; Park, Chang-Hwan; Kim, Dong-Min

    2016-01-01

    As there are four codes for the analysis of severe accidents, an in-vessel analysis code (CSPACE), an ex-vessel analysis code (SACAP), a corium behavior analysis code (COMPASS), and a fission product behavior analysis code, it is complex to implement the coupling of these codes with methodologies similar to those used for RELAP and CONTEMPT or SPACE and CAP. Because of that, an efficient coupling scheme, the so-called pseudo shared memory architecture, was introduced. In this paper, coupling methodologies are compared, and the methodology used for the analysis of severe accidents is discussed in detail. The barrier between in-vessel and ex-vessel analysis has been removed with the implementation of coupled computer codes using a pseudo shared memory architecture based on MPI. What remains is the proper choice and checking of variables and values for the selected severe accident scenarios, e.g., the TMI accident. Even though it is possible to couple more than two computer codes with the pseudo shared memory architecture, the methodology should be revised to couple parallel codes, especially when they are programmed using MPI.

  18. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    Science.gov (United States)

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi-experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation-based teaching methods, that is, realistic simulation and non-realistic simulation, on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  19. Amorphous Semiconductors: From Photocatalyst to Computer Memory

    Science.gov (United States)

    Sundararajan, Mayur

    encouraging but inconclusive. Then the method was successfully demonstrated on mesoporous TiO2-SiO2 by showing a shift in its optical bandgap. One special class of amorphous semiconductors is chalcogenide glasses, which exhibit high ionic conductivity even at room temperature. When metal-doped chalcogenide glasses are under an electric field, they become electronically conductive. These properties are exploited in the computer memory storage application of Conductive Bridging Random Access Memory (CBRAM). CBRAM is a non-volatile memory that is a strong contender to replace conventional volatile RAMs such as DRAM, SRAM, etc. This technology has already been commercialized, but the working mechanism is still not clearly understood, especially the nature of the conductive bridge filament. In this project, the CBRAM memory cells are fabricated by the thermal evaporation method with Agx(GeSe2)1-x as the solid electrolyte layer, Ag as the active electrode and Au as the inert electrode. By careful use of cyclic voltammetry, conductive filaments were grown on the surface and in the bulk of the solid electrolyte. The comparison between the two filaments revealed major differences that contradict the existing working mechanism. After compiling all the results, a modified working mechanism is proposed. SAXS is a powerful tool to characterize the nanostructure of glasses, and the analysis of SAXS data is usually performed by different programs. In this project, the Irena and GIFT programs were compared by analysing the SAXS data of glass and glass-ceramic samples. Irena was shown to be unsuitable for the analysis of SAXS data that has a significant contribution from interparticle interactions, whereas GIFT was demonstrated to be better suited for such analysis. Additionally, the results obtained by both programs for samples with low interparticle interactions were shown to be consistent.

  20. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    Science.gov (United States)

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. Compared with the numerical method that uses a fast Fourier transform to Fourier-transform the kernel function, the analytical solution drastically reduces both the calculation time and the memory usage at no additional cost. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
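
    As a concrete illustration of the kind of Bessel-function expansion involved (this is the standard Jacobi-Anger identity, not the paper's exact kernel), a few dozen series terms already reproduce a cylindrical-wave factor to machine precision:

        # Jacobi-Anger identity: exp(i*a*sin(theta)) = sum_n J_n(a) * exp(i*n*theta).
        # Expansions of this type let a kernel's Fourier transform be written as a
        # Bessel series instead of being evaluated numerically with FFTs.
        import numpy as np
        from scipy.special import jv

        a, theta = 2.5, 0.7
        exact = np.exp(1j * a * np.sin(theta))
        series = sum(jv(n, a) * np.exp(1j * n * theta) for n in range(-30, 31))
        print(abs(exact - series))  # ~1e-16: a short truncated series suffices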

  1. Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment

    Science.gov (United States)

    Hancock, Thomas M., III

    1999-01-01

    This paper details the scientific objectives, experiment design, data collection method, and post-flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip". In its mildest form, a soft-event upset can cause software exceptions or unexpected events, trigger spacecraft safing (ending data collection), or corrupt fault protection and error recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment's detectors. Post-flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.

  2. Performing an allreduce operation using shared memory

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Dozsa, Gabor [Ardsley, NY; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.
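
    A minimal sketch of the idea, under the assumption that the work units simply partition the vector among cores (the patent's job-status bookkeeping is elided); Python's multiprocessing.shared_memory stands in for the compute node's shared RAM, and all names are illustrative:

        # Sketch only: an allreduce (elementwise sum) split into per-core work
        # units over shared memory; each process reduces one disjoint slice.
        from multiprocessing import Process, shared_memory
        import numpy as np

        CORES, N = 4, 1024

        def work_unit(name, core):
            shm = shared_memory.SharedMemory(name=name)
            data = np.ndarray((CORES + 1, N), dtype=np.float64, buffer=shm.buf)
            sl = slice(core * N // CORES, (core + 1) * N // CORES)
            data[CORES, sl] = data[:CORES, sl].sum(axis=0)  # reduce this slice
            shm.close()

        if __name__ == "__main__":
            shm = shared_memory.SharedMemory(create=True, size=(CORES + 1) * N * 8)
            data = np.ndarray((CORES + 1, N), dtype=np.float64, buffer=shm.buf)
            data[:CORES] = np.random.rand(CORES, N)   # one contribution per core
            procs = [Process(target=work_unit, args=(shm.name, c)) for c in range(CORES)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            assert np.allclose(data[CORES], data[:CORES].sum(axis=0))  # reduced row
            shm.close(); shm.unlink()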

  3. Effects of Violent and Non-Violent Computer Game Content on Memory Performance in Adolescents

    Science.gov (United States)

    Maass, Asja; Kollhorster, Kirsten; Riediger, Annemarie; MacDonald, Vanessa; Lohaus, Arnold

    2011-01-01

    The present study focuses on the short-term effects of electronic entertainment media on memory and learning processes. It compares the effects of violent versus non-violent computer game content in a condition of playing and in another condition of watching the same game. The participants consisted of 83 female and 94 male adolescents with a mean…

  4. Working Memory Interventions with Children: Classrooms or Computers?

    Science.gov (United States)

    Colmar, Susan; Double, Kit

    2017-01-01

    The importance of working memory to classroom functioning and academic outcomes has led to the development of many interventions designed to enhance students' working memory. In this article we briefly review the evidence for the relative effectiveness of classroom and computerised working memory interventions in bringing about measurable and…

  5. Optical quantum memory

    Science.gov (United States)

    Lvovsky, Alexander I.; Sanders, Barry C.; Tittel, Wolfgang

    2009-12-01

    Quantum memory is essential for the development of many devices in quantum information processing, including a synchronization tool that matches various processes within a quantum computer, an identity quantum gate that leaves any state unchanged, and a mechanism to convert heralded photons to on-demand photons. In addition to quantum computing, quantum memory will be instrumental for implementing long-distance quantum communication using quantum repeaters. The importance of this basic quantum gate is exemplified by the multitude of optical quantum memory mechanisms being studied, such as optical delay lines, cavities and electromagnetically induced transparency, as well as schemes that rely on photon echoes and the off-resonant Faraday interaction. Here, we report on state-of-the-art developments in the field of optical quantum memory, establish criteria for successful quantum memory and detail current performance levels.

  6. All-spin logic operations: Memory device and reconfigurable computing

    Science.gov (United States)

    Patra, Moumita; Maiti, Santanu K.

    2018-02-01

    Exploiting the spin degree of freedom of the electron, a new proposal is given for characterizing spin-based logical operations using a quantum interferometer that can be utilized as a programmable spin logic device (PSLD). The ON and OFF states of both inputs and outputs are described by the spin state only, circumventing the spin-to-charge conversion at every stage that is often used in conventional devices and requires extra hardware that can eventually diminish the efficiency. All possible logic functions can be engineered from a single device without redesigning the circuit, which certainly offers opportunities for designing new-generation spintronic devices. Moreover, we also discuss the utilization of the present model as a memory device and suitable computing operations with proposed experimental setups.

  7. Simulation of radiation effects on three-dimensional computer optical memories

    Science.gov (United States)

    Moscovitch, M.; Emfietzoglou, D.

    1997-01-01

    A model was developed to simulate the effects of heavy charged-particle (HCP) radiation on the information stored in three-dimensional computer optical memories. The model is based on (i) the HCP track radial dose distribution, (ii) the spatial and temporal distribution of temperature in the track, (iii) the matrix-specific radiation-induced changes that will affect the response, and (iv) the kinetics of transition of photochromic molecules from the colored to the colorless isomeric form (bit flip). It is shown that information stored in a volume of several nanometers radius around the particle's track axis may be lost. The magnitude of the effect is dependent on the particle's track structure.

  8. Progress In Optical Memory Technology

    Science.gov (United States)

    Tsunoda, Yoshito

    1987-01-01

    More than 20 years have passed since the concept of optical memory was first proposed in 1966. Since then, considerable progress has been made in this area, together with the creation of completely new markets for optical memory in consumer and computer applications. The first generation of optical memory was developed mainly with holographic recording technology in the late 1960s and early 1970s. A considerable number of developments were made in both analog and digital memory applications; unfortunately, these technologies never became commercial products. The second generation of optical memory started at the beginning of the 1970s with bit-by-bit recording technology. Read-only optical memories such as video disks and compact audio disks have been extensively investigated. Since laser diodes were first applied to optical video disk readout in 1976, there have been extensive developments of laser-diode pick-ups for optical disk memory systems. The third generation of optical memory started in 1978 with bit-by-bit read/write technology using laser diodes. Recording materials, both write-once and erasable, have been actively pursued at several research institutes; these technologies are mainly focused on optical memory systems for computer applications. Such practical applications of optical memory technology have resulted in the creation of new products such as compact audio disks and computer file memories.

  9. Static Computer Memory Integrity Testing (SCMIT): An experiment flown on STS-40 as part of GAS payload G-616

    Science.gov (United States)

    Hancock, Thomas

    1993-01-01

    This experiment investigated the integrity of static computer memory (floppy disk media) when exposed to the environment of low earth orbit. The experiment attempted to record soft-event upsets (bit-flips) in static computer memory. Typical conditions that exist in low earth orbit that may cause soft-event upsets include: cosmic rays, low level background radiation, charged fields, static charges, and the earth's magnetic field. Over the years several spacecraft have been affected by soft-event upsets (bit-flips), and these events have caused a loss of data or affected spacecraft guidance and control. This paper describes a commercial spin-off that is being developed from the experiment.

  10. Computer Simulations of Developmental Change: The Contributions of Working Memory Capacity and Long-Term Knowledge

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2008-01-01

    Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…

  11. Main Memory DBMS

    NARCIS (Netherlands)

    P.A. Boncz (Peter); L. Liu (Lei); M. Tamer Özsu

    2008-01-01

    A main memory database system is a DBMS that primarily relies on main memory for computer data storage. In contrast, conventional database management systems employ hard-disk-based persistent storage.

  12. Novel spintronics devices for memory and logic: prospects and challenges for room temperature all spin computing

    Science.gov (United States)

    Wang, Jian-Ping

    An energy-efficient memory and logic device for the post-CMOS era has been the goal of a variety of research fields. The limits of scaling, which we expect to reach by the year 2025, demand that future advances in computational power be realized not from ever-shrinking device sizes but from innovative designs and new materials and physics. Magnetoresistance-based devices have been a promising candidate for future integrated magnetic computation because of their unique non-volatility and functionalities. The application of perpendicular magnetic anisotropy for potential STT-RAM applications was demonstrated and has since been intensively investigated by both academic and industry groups, but there is no clear pathway for how scaling will eventually work for both memory and logic applications. One of the main reasons is that no material stack candidate has been demonstrated that could lead to a scaling scheme down to sub-10 nm. Another challenge for the use of magnetoresistance-based devices in logic applications is their available switching speed and writing energy. Although good progress has been made in demonstrating fast switching of a thermally stable magnetic tunnel junction (MTJ) down to 165 ps, this is still several times slower than the CMOS counterpart. In this talk, I will review the recent progress by my research group and my C-SPIN colleagues, then discuss the opportunities, challenges and some potential pathways for magnetoresistance-based devices for memory and logic applications and their integration for a room-temperature all-spin computing system.

  13. The MUSOS (MUsic SOftware System) Toolkit: A computer-based, open source application for testing memory for melodies.

    Science.gov (United States)

    Rainsford, M; Palmer, M A; Paine, G

    2018-04-01

    Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.

  14. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    Science.gov (United States)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  15. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

    Science.gov (United States)

    Choi, Shinhyun; Tan, Scott H.; Li, Zefan; Kim, Yunjo; Choi, Chanyeol; Chen, Pai-Yu; Yeon, Hanwool; Yu, Shimeng; Kim, Jeehwan

    2018-01-01

    Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

  16. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    Science.gov (United States)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  17. A single-trace dual-process model of episodic memory: a novel computational account of familiarity and recollection.

    Science.gov (United States)

    Greve, Andrea; Donaldson, David I; van Rossum, Mark C W

    2010-02-01

    Dual-process theories of episodic memory state that retrieval is contingent on two independent processes: familiarity (providing a sense of oldness) and recollection (recovering events and their context). A variety of studies have reported distinct neural signatures for familiarity and recollection, supporting dual-process theory. One outstanding question is whether these signatures reflect the activation of distinct memory traces or the operation of different retrieval mechanisms on a single memory trace. We present a computational model that uses a single neuronal network to store memory traces, but two distinct and independent retrieval processes access the memory. The model is capable of performing familiarity- and recollection-based discrimination between old and new patterns, demonstrating that dual-process models need not rely on multiple independent memory traces but can use a single trace. Importantly, our putative familiarity and recollection processes exhibit distinct characteristics analogous to those found in empirical data; they diverge in capacity and sensitivity to sparse and correlated patterns, exhibit distinct ROC curves, and account for performance on both item and associative recognition tests. The demonstration that a single-trace, dual-process model can account for a range of empirical findings highlights the importance of distinguishing between neuronal processes and the neuronal representations on which they operate.
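
    In the same spirit (a schematic analogue, not the authors' network), a single Hopfield-style weight matrix can support both readouts: familiarity as an instantaneous match signal, and recollection as iterative pattern completion on the very same trace:

        # Schematic sketch: one stored trace, two retrieval processes.
        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 200, 10
        patterns = rng.choice([-1, 1], size=(P, N))
        W = (patterns.T @ patterns) / N          # the single memory trace
        np.fill_diagonal(W, 0)

        def familiarity(probe):
            return probe @ W @ probe / N         # fast scalar "oldness" signal

        def recollect(cue, steps=10):
            s = cue.copy()
            for _ in range(steps):               # attractor dynamics, same trace
                s = np.sign(W @ s)
                s[s == 0] = 1
            return s

        old = patterns[0].copy()
        new = rng.choice([-1, 1], size=N)
        print("familiarity old vs new:", familiarity(old), familiarity(new))
        cue = old.copy(); cue[:N // 4] *= -1     # degraded retrieval cue
        print("recollection overlap:", (recollect(cue) @ old) / N)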

  18. A short review of memory research

    Directory of Open Access Journals (Sweden)

    Igor Areh

    2004-09-01

    Scientific research on memory began at the end of the 19th century with studies of semantic and/or long-term memory. In most cases memory was interpreted as a storehouse for various data, and the quality of the storehouse was usually defined by the quantity of recalled data. Research work concentrated on the specific connection between memory and learning. At that time only a few authors developed theories that were rare, uncommon and ahead of their time (e.g. Bartlett, Ribot, Freud). Even in the 20th century, when the behavioural stimulus-response approach began to dominate, the measure of memory quality was still the quantity of memory recall. In the 1960s the rise of cognitive psychology began, the computer metaphor was born, and the behavioural comprehension of the cognitive system was finally surpassed. The cognitive system was understood as a computer-like interface between an organism and its environment. In recent years the computer metaphor is no longer dominant, and new, more efficient concepts are moving forward. The quantity of data recalled, as the measure of memory quality, is not so important any more; attention is focused on the accuracy of memory recall.

  19. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    Science.gov (United States)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data is being generated from global surveillance systems and model simulations. These data are widely used to analyze environmental problems, such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging because data processing and analysis are both data-intensive and computing-intensive. To tackle the challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g., netCDF4, HDF4) as native formats, which are stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve datasets according to a defined geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy is applied to split the queried data equally across the computing nodes and store it in memory as climateRDDs for processing. By leveraging Spark SQL and user-defined functions (UDFs), climate data analysis operations can be conducted in the intuitive SQL language. ClimateSpark is evaluated by two use cases using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. One use case conducts a spatiotemporal query and visualizes the subset results in an animation; the other compares different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark can significantly accelerate data query and processing, and enables complex analysis services to be served in a SQL-style fashion.
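
    A hedged sketch of the query style described, using plain PySpark with a toy schema (the real system indexes netCDF4/HDF4 chunks in HDFS; the table, columns, and values here are invented):

        # Toy stand-in for a spatiotemporal bounding-box query over chunked data.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("climate-sketch").getOrCreate()

        # pretend each row summarizes one array chunk: (lat, lon, time, mean_temp)
        chunks = spark.createDataFrame(
            [(10.0, 20.0, "2015-01", 287.1), (55.0, 120.0, "2015-01", 260.4),
             (12.5, 22.0, "2015-02", 288.0)],
            ["lat", "lon", "time", "mean_temp"])
        chunks.createOrReplaceTempView("chunks")

        # bounding-box subset plus aggregation, expressed in SQL as in the portal
        subset = spark.sql("""
            SELECT time, avg(mean_temp) AS t
            FROM chunks
            WHERE lat BETWEEN 0 AND 30 AND lon BETWEEN 0 AND 60
              AND time >= '2015-01'
            GROUP BY time ORDER BY time""")
        subset.show()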

  20. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
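
    The key semantic here is that transfers between two endpoints retire deterministically in initiation order, so a FENCE need only wait for the channel to drain rather than keep per-transfer accounting. A toy sketch of that ordering rule (illustrative classes, not IBM's PAMI code):

        # Toy model: in-order delivery makes a FENCE equivalent to draining.
        from collections import deque

        class Channel:
            def __init__(self):
                self.pending = deque()
            def put(self, name, payload):      # initiate a "DMA" transfer
                self.pending.append((name, payload))
            def fence(self):                   # completes only after prior ops
                while self.pending:
                    name, payload = self.pending.popleft()  # deterministic order
                    print("delivered", name, payload)
                print("FENCE complete: all prior DMA transfers retired")

        ch = Channel()
        ch.put("PUT", {"bytes": 4096})
        ch.put("GET", {"bytes": 512})
        ch.fence()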

  2. Cognitive memory.

    Science.gov (United States)

    Widrow, Bernard; Aragon, Juan Carlos

    2013-05-01

    Regarding the workings of the human mind, memory and pattern recognition seem to be intertwined. You generally do not have one without the other. Taking inspiration from life experience, a new form of computer memory has been devised. Certain conjectures about human memory are keys to the central idea. The design of a practical and useful "cognitive" memory system is contemplated, a memory system that may also serve as a model for many aspects of human memory. The new memory does not function like a computer memory where specific data is stored in specific numbered registers and retrieval is done by reading the contents of the specified memory register, or done by matching key words as with a document search. Incoming sensory data would be stored at the next available empty memory location, and indeed could be stored redundantly at several empty locations. The stored sensory data would neither have key words nor would it be located in known or specified memory locations. Sensory inputs concerning a single object or subject are stored together as patterns in a single "file folder" or "memory folder". When the contents of the folder are retrieved, sights, sounds, tactile feel, smell, etc., are obtained all at the same time. Retrieval would be initiated by a query or a prompt signal from a current set of sensory inputs or patterns. A search through the memory would be made to locate stored data that correlates with or relates to the prompt input. The search would be done by a retrieval system whose first stage makes use of autoassociative artificial neural networks and whose second stage relies on exhaustive search. Applications of cognitive memory systems have been made to visual aircraft identification, aircraft navigation, and human facial recognition. Concerning human memory, reasons are given why it is unlikely that long-term memory is stored in the synapses of the brain's neural networks. Reasons are given suggesting that long-term memory is stored in DNA or RNA

  3. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  4. Internode data communications in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
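
    A behavioral sketch of this early-arrival scheme (illustrative classes, not the actual messaging-unit firmware): buffers are pre-allocated at boot, messages arriving before a process initializes are stashed, and the process copies them out once it has set up its own buffer in main memory:

        # Toy model of per-process early-arrival buffers in a messaging unit.
        class MessagingUnit:
            def __init__(self, process_ids):
                self.buffers = {pid: [] for pid in process_ids}  # allocated at boot

            def receive(self, pid, message):
                self.buffers[pid].append(message)      # stash early arrivals

            def drain_to(self, pid, main_memory_buffer):
                main_memory_buffer.extend(self.buffers[pid])  # copy on init
                self.buffers[pid].clear()

        mu = MessagingUnit(process_ids=[0, 1])
        mu.receive(1, b"hello before init")    # arrives before process 1 exists
        proc1_mailbox = []                     # process 1 sets up its own buffer
        mu.drain_to(1, proc1_mailbox)
        print(proc1_mailbox)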

  5. Applications for Packetized Memory Interfaces

    OpenAIRE

    Watson, Myles Glen

    2015-01-01

    The performance of the memory subsystem has a large impact on the performance of modern computer systems. Many important applications are memory bound and others are expected to become memory bound in the future. The importance of memory performance makes it imperative to understand and optimize the interactions between applications and the system architecture. Prototyping and exploring various configurations of memory systems can give important insights, but current memory interfaces are lim...

  6. System of common usage on the base of external memory devices and the SM-3 computer

    International Nuclear Information System (INIS)

    Baluka, G.; Vasin, A.Yu.; Ermakov, V.A.; Zhukov, G.P.; Zimin, G.N.; Namsraj, Yu.; Ostrovnoj, A.I.; Savvateev, A.S.; Salamatin, I.M.; Yanovskij, G.Ya.

    1980-01-01

    An easily modified system of common usage, based on external memory devices and an SM-3 minicomputer, that replaces some pulse analysers is described. The system has the merits of pulse analysers (PA) and is more advantageous with regard to the effectiveness of equipment use, the possibility of changing configuration and functions, the protection of data against losses due to user errors and certain failures, the price per registration channel, and the space occupied. The system of common usage is intended for the IBR-2 pulse reactor computing centre. It was designed using the SANPO system tools for the SM-3 computer. [ru]

  7. Logic and memory concepts for all-magnetic computing based on transverse domain walls

    International Nuclear Information System (INIS)

    Vandermeulen, J; Van de Wiele, B; Dupré, L; Van Waeyenberge, B

    2015-01-01

    We introduce a non-volatile digital logic and memory concept in which the binary data is stored in the transverse magnetic domain walls present in in-plane magnetized nanowires with sufficiently small cross sectional dimensions. We assign the digital bit to the two possible orientations of the transverse domain wall. Numerical proofs-of-concept are presented for a NOT-, AND- and OR-gate, a FAN-out as well as a reading and writing device. Contrary to the chirality based vortex domain wall logic gates introduced in Omari and Hayward (2014 Phys. Rev. Appl. 2 044001), the presented concepts remain applicable when miniaturized and are driven by electrical currents, making the technology compatible with the in-plane racetrack memory concept. The individual devices can be easily combined to logic networks working with clock speeds that scale linearly with decreasing design dimensions. This opens opportunities to an all-magnetic computing technology where the digital data is stored and processed under the same magnetic representation. (paper)

  8. Phase change memory

    CERN Document Server

    Qureshi, Moinuddin K

    2011-01-01

    As conventional memory technologies such as DRAM and Flash run into scaling challenges, architects and system designers are forced to look at alternative technologies for building future computer systems. This synthesis lecture begins by listing the requirements for a next generation memory technology and briefly surveys the landscape of novel non-volatile memories. Among these, Phase Change Memory (PCM) is emerging as a leading contender, and the authors discuss the material, device, and circuit advances underlying this exciting technology. The lecture then describes architectural solutions t

  9. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2015-06-01

    ...examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully... memory images and malware, this new series of reports will be directed at those who must analyse Linux malware-infected memory images. The skills...

  10. Iterative schemes for parallel Sn algorithms in a shared-memory computing environment

    International Nuclear Information System (INIS)

    Haghighat, A.; Hunter, M.A.; Mattis, R.E.

    1995-01-01

    Several two-dimensional spatial domain partitioning S n transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may result in large parallel efficiencies (>60%) on C90. It is also demonstrated that for a realistic shielding problem, the use of the negative flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency
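
    The red-black idea generalizes beyond transport sweeps: cells of one color depend only on cells of the other color, so all same-color updates can proceed concurrently. A generic numpy illustration on a simple Laplace problem (the concurrency pattern only, not the TWOTRAN-II Sn solver):

        # Red-black update: each color is updated in one vectorized, parallel step.
        import numpy as np

        n = 64
        u = np.zeros((n, n)); u[0, :] = 1.0   # Laplace problem, hot top boundary
        red = np.fromfunction(lambda i, j: (i + j) % 2 == 0, (n, n))

        for _ in range(500):
            for color in (red, ~red):
                nb = np.zeros_like(u)
                nb[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                         u[1:-1, :-2] + u[1:-1, 2:])
                interior = color.copy()       # restrict update to interior cells
                interior[0, :] = interior[-1, :] = False
                interior[:, 0] = interior[:, -1] = False
                u[interior] = nb[interior]    # all same-color cells at once
        print("center value after sweeps:", u[n // 2, n // 2])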

  11. The impact of taxing working memory on negative and positive memories

    NARCIS (Netherlands)

    Engelhard, I.M.; van Uijen, S.L.; Van den Hout, M.A.

    2010-01-01

    BACKGROUND: Earlier studies have shown that horizontal eye movement (EM) during retrieval of a negative memory reduces its vividness and emotionality. This may be due to both tasks competing for working memory (WM) resources. This study examined whether playing the computer game "Tetris" also blurs

  12. Conditional load and store in a shared memory

    Science.gov (United States)

    Blumrich, Matthias A; Ohmacht, Martin

    2015-02-03

    A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
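
    A behavioral sketch of these semantics (reservation registers in a shared cache, hardware details elided): a store-conditional succeeds only if no intervening store has invalidated the processor's reservation:

        # Load-reserve / store-conditional semantics with reservation registers.
        class SharedCache:
            def __init__(self):
                self.mem = {}
                self.reservations = {}              # one register per processor

            def load_reserve(self, cpu, addr):
                self.reservations[cpu] = addr       # record the reservation
                return self.mem.get(addr, 0)

            def store(self, addr, value):
                # any store invalidates other reservations on this address
                for cpu, a in list(self.reservations.items()):
                    if a == addr:
                        del self.reservations[cpu]
                self.mem[addr] = value

            def store_conditional(self, cpu, addr, value):
                if self.reservations.get(cpu) != addr:
                    return False                    # reservation lost: fail
                self.store(addr, value)
                return True

        c = SharedCache()
        v = c.load_reserve(cpu=0, addr=0x10)
        c.store(0x10, 7)                            # another processor intervenes
        print(c.store_conditional(cpu=0, addr=0x10, value=v + 1))  # False: retry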

  13. External-Memory Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Arge, Lars; Zeh, Norbert

    2010-01-01

    The data sets involved in many modern applications are often too massive to fit in the main memory of even the most powerful computers and must therefore reside on disk. Thus communication between internal and external memory, and not actual computation time, becomes the bottleneck in the computation. This is due to the huge difference in access time between fast internal memory and slower external memory such as disks. The goal of theoretical work in the area of external-memory algorithms (also called I/O algorithms or out-of-core algorithms) has been to develop algorithms that minimize the number of Input/Output operations. The use of parallel disks has also received a lot of theoretical attention, and recent surveys of theoretical results in the area of I/O-efficient algorithms are available. TPIE is designed to bridge the gap between the theory and practice of parallel I/O systems. It is intended to demonstrate all...

  14. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    Science.gov (United States)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

    The unprecedented growth of climate data creates new opportunities for climate studies, and yet big climate data pose a grand challenge to climatologists to efficiently manage and analyze big data. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. Chunking data structure improves parallel I/O efficiency, while a spatiotemporal index is built for the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebook). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multiple-dimensional, array-based datasets in various geoscience domains.

  15. The Memory Aid study: protocol for a randomized controlled clinical trial evaluating the effect of computer-based working memory training in elderly patients with mild cognitive impairment (MCI).

    Science.gov (United States)

    Flak, Marianne M; Hernes, Susanne S; Chang, Linda; Ernst, Thomas; Douet, Vanessa; Skranes, Jon; Løhaugen, Gro C C

    2014-05-03

    Mild cognitive impairment (MCI) is a condition characterized by memory problems that are more severe than the normal cognitive changes due to aging, but less severe than dementia. Reduced working memory (WM) is regarded as one of the core symptoms of an MCI condition. Recent studies have indicated that WM can be improved through computer-based training. The objective of this study is to evaluate whether WM training is effective in improving cognitive function in elderly patients with MCI, and whether cognitive training induces structural changes in the white and gray matter of the brain, as assessed by structural MRI. The proposed study is a blinded, randomized, controlled trial that will include 90 elderly patients diagnosed with MCI at a hospital-based memory clinic. The participants will be randomized to either a training program or a placebo version of the program. The intervention is computerized WM training performed for 45 minutes per session, with 25 sessions over 5 weeks. The placebo version is identical in duration but is non-adaptive in the difficulty level of the tasks. Neuropsychological assessment and structural MRI will be performed before training, 1 month after training, and at a 5-month follow-up. If computer-based training results in positive changes to memory functions in patients with MCI, this may represent a new, cost-effective treatment for MCI. Secondly, evaluation of any training-induced structural changes to gray or white matter will improve the current understanding of the mechanisms behind effective cognitive interventions in patients with MCI. ClinicalTrials.gov NCT01991405. November 18, 2013.

  16. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  18. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  19. Enhancing Assisted Living Technology with Extended Visual Memory

    Directory of Open Access Journals (Sweden)

    Joo-Hwee Lim

    2011-05-01

    Human vision and memory are powerful cognitive faculties by which we understand the world. However, they are imperfect and, further, subject to deterioration with age. We propose a cognition-inspired computational model, Extended Visual Memory (EVM), within the Computer-Aided Vision (CAV) framework, to assist humans in vision-related tasks. We exploit wearable sensors such as cameras, GPS and ambient computing facilities to complement a user's vision and memory functions by answering four types of queries central to visual activities, namely Retrieval, Understanding, Navigation and Search. Learning in EVM relies on both frequency-based and attention-driven mechanisms to store view-based visual fragments (VF), which are abstracted into high-level visual schemas (VS), both in the visual long-term memory. During inference, the visual short-term memory plays a key role in computing the visual similarity between the input (or its schematic representation) and VF, exemplified from VS when necessary. We present an assisted-living scenario, termed EViMAL (Extended Visual Memory for Assisted Living), targeted at mild-dementia patients, which provides novel functions such as hazard warning, visual reminders, object look-up and event review. We envisage EVM having potential benefits in alleviating memory loss, improving recall precision and enhancing memory capacity through external support.

  20. External Memory Pipelining Made Easy With TPIE

    OpenAIRE

    Arge, Lars; Rav, Mathias; Svendsen, Svend C.; Truelsen, Jakob

    2017-01-01

    When handling large datasets that exceed the capacity of the main memory, movement of data between main memory and external memory (disk), rather than actual (CPU) computation time, is often the bottleneck in the computation. Since data is moved between disk and main memory in large contiguous blocks, this has led to the development of a large number of I/O-efficient algorithms that minimize the number of such block movements. TPIE is one of two major libraries that have been developed to sup...

  1. Disentangling the Relationship Between the Adoption of In-Memory Computing and Firm Performance

    DEFF Research Database (Denmark)

    Fay, Marua; Müller, Oliver; vom Brocke, Jan

    2016-01-01

    Recent growth in data volume, variety, and velocity has led to an increased demand for high-performance data processing and analytics solutions. In-memory computing (IMC) enables organizations to boost their information processing capacity and is widely acknowledged as one of the leading strategic technology trends. This work aims at explaining the relationship between the adoption of IMC solutions and firm performance. In this research-in-progress paper we discuss the theoretical background of our work, describe the proposed research design, and develop five hypotheses for later testing. Our work aims at contributing to the research...

  2. Spin-wave interference patterns created by spin-torque nano-oscillators for memory and computation

    International Nuclear Information System (INIS)

    Macia, Ferran; Kent, Andrew D; Hoppensteadt, Frank C

    2011-01-01

    Magnetization dynamics in nanomagnets has attracted broad interest since it was predicted that a dc current flowing through a thin magnetic layer can create spin-wave excitations. These excitations are due to spin momentum transfer, a transfer of spin angular momentum between conduction electrons and the background magnetization, which enables new types of information processing. Here we show how arrays of spin-torque nano-oscillators can create propagating spin-wave interference patterns of use for memory and computation. Memristive transponders distributed on the thin film respond to threshold tunnel magnetoresistance values, thereby allowing spin-wave detection and creating new excitation patterns. We show how groups of transponders create resonant (reverberating) spin-wave interference patterns that may be used for polychronous wave computation and information storage.

  3. Assessing Working Memory in Children: The Comprehensive Assessment Battery for Children - Working Memory (CABC-WM).

    Science.gov (United States)

    Cabbage, Kathryn; Brinkley, Shara; Gray, Shelley; Alt, Mary; Cowan, Nelson; Green, Samuel; Kuo, Trudy; Hogan, Tiffany P

    2017-06-12

    The Comprehensive Assessment Battery for Children - Working Memory (CABC-WM) is a computer-based battery designed to assess different components of working memory in young school-age children. Working memory deficits have been identified in children with language-based learning disabilities, including dyslexia [1, 2] and language impairment [3, 4], but it is not clear whether these children exhibit deficits in subcomponents of working memory, such as visuospatial or phonological working memory. The CABC-WM is administered on a desktop computer with a touchscreen interface and was specifically developed to be engaging and motivating for children. Although the long-term goal of the CABC-WM is to provide individualized working memory profiles in children, the present study focuses on the initial success and utility of the CABC-WM for measuring central executive, visuospatial, phonological loop, and binding constructs in children with typical development. Immediate next steps are to administer the CABC-WM to children with specific language impairment, dyslexia, and comorbid specific language impairment and dyslexia.

  4. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di; Diao, Ruisheng; Huang, Zhenyu

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch-switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using single-processor-based dynamic simulation solutions. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation, using Open Multi-Processing (OpenMP) on a shared-memory platform and the Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performance in running parallel dynamic simulation is compared and demonstrated.
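
    Keeping to Python for illustration (the paper's solver uses OpenMP and MPI directly), the two architectures differ mainly in how the per-step integrations are distributed: worker processes over one node's shared memory, versus scatter/gather messages across nodes. A sketch with invented toy dynamics:

        # Illustrative contrast, not the paper's solver.
        import numpy as np
        from multiprocessing import Pool

        def integrate(block):          # one generator group's ODE step (toy)
            x, dt = block
            return x + dt * np.sin(x)

        if __name__ == "__main__":
            state = np.linspace(0.0, 1.0, 8)
            # shared-memory style (OpenMP analogue): workers on one node
            with Pool(4) as pool:
                state = np.array(pool.map(integrate, [(x, 0.01) for x in state]))
            print("after one parallel step:", state[:3])

        # distributed-memory style (MPI analogue), run under mpirun:
        #   from mpi4py import MPI
        #   comm = MPI.COMM_WORLD
        #   parts = np.array_split(state, comm.Get_size()) if comm.Get_rank() == 0 else None
        #   local = comm.scatter(parts, root=0)
        #   local = np.array([integrate((x, 0.01)) for x in local])
        #   gathered = comm.gather(local, root=0)   # root reassembles the state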

  5. Associative Memory computing power and its simulation.

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

    The associative memory (AM) chip is an ASIC device specifically designed to perform "pattern matching" at very high speed and with parallel access to memory locations. The most extensive use for such a device will be the ATLAS Fast Tracker (FTK) processor, where more than 8000 chips will be installed in 128 VME boards specifically designed for high throughput in order to exploit the chip's features. Each AM chip will store a database of about 130000 pre-calculated patterns, allowing FTK to use about 1 billion patterns for the whole system, with any data inquiry broadcast to all memory elements simultaneously within the same clock cycle (10 ns); thus data retrieval time is independent of the database size. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS FTK processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 μs. The simulation of such a parallelized system is an extremely complex task when executed in comm...
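
    In software, the chip's single-clock broadcast compare can only be emulated by vectorizing over the whole pattern bank, which is part of what makes faithful simulation expensive. A simplified stand-in (pattern contents, word width, and hit logic are invented for illustration):

        # Software emulation of associative-memory pattern matching.
        import numpy as np

        rng = np.random.default_rng(1)
        N_PATTERNS, N_LAYERS = 130_000, 8     # roughly one chip's bank size
        bank = rng.integers(0, 2**16, size=(N_PATTERNS, N_LAYERS), dtype=np.uint16)

        def query(hits_required, event_words):
            """Match incoming detector words against every pattern at once."""
            matches = np.zeros(N_PATTERNS, dtype=np.int8)
            for layer, word in enumerate(event_words):
                matches += (bank[:, layer] == word)  # broadcast compare, all rows
            return np.flatnonzero(matches >= hits_required)

        event = bank[42].copy(); event[3] ^= 1   # a track with one corrupted layer
        roads = query(hits_required=7, event_words=event)
        print("candidate roads:", roads[:5])     # pattern 42 still matches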

  6. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    Science.gov (United States)

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)

  7. Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute

    OpenAIRE

    Dang Hung; Dinh Tien Tuan Anh; Chang Ee-Chien; Ooi Beng Chin

    2017-01-01

    We consider privacy-preserving computation of big data using trusted computing primitives with limited private memory. Simply ensuring that the data remains encrypted outside the trusted computing environment is insufficient to preserve data privacy, for data movement observed during computation could leak information. While it is possible to thwart such leakage using a generic solution such as ORAM [42], designing efficient privacy-preserving algorithms is challenging. Besides computation effi...

  8. Test-Retest Reliability of Computerized, Everyday Memory Measures and Traditional Memory Tests.

    Science.gov (United States)

    Youngjohn, James R.; And Others

    Test-retest reliabilities and practice effect magnitudes were considered for nine computer-simulated tasks of everyday cognition and five traditional neuropsychological tests. The nine simulated everyday memory tests were from the Memory Assessment Clinic battery as follows: (1) simple reaction time while driving; (2) divided attention (driving…

  9. Memory Reconsolidation and Computational Learning

    Science.gov (United States)

    2010-03-01

    Memory models are central to Artificial Intelligence and Machine... beyond [1]. The advances cited are a significant step toward creating Artificial Intelligence via neural networks at the human level. Our network... (Cited reference: N. Siegelmann-Danieli and H.T. Siegelmann, "Robust Artificial Life Via Artificial Programmed Death," Artificial Intelligence 172(6-7), April 2008: 884-898.)

  10. Distinctive Features Hold a Privileged Status in the Computation of Word Meaning: Implications for Theories of Semantic Memory

    Science.gov (United States)

    Cree, George S.; McNorgan, Chris; McRae, Ken

    2006-01-01

    The authors present data from 2 feature verification experiments designed to determine whether distinctive features have a privileged status in the computation of word meaning. They use an attractor-based connectionist model of semantic memory to derive predictions for the experiments. Contrary to central predictions of the conceptual structure…

  11. Assessing Working Memory in Children: The Comprehensive Assessment Battery for Children – Working Memory (CABC-WM)

    OpenAIRE

    Cabbage, Kathryn; Brinkley, Shara; Gray, Shelley; Alt, Mary; Cowan, Nelson; Green, Samuel; Kuo, Trudy; Hogan, Tiffany P.

    2017-01-01

    The Comprehensive Assessment Battery for Children - Working Memory (CABC-WM) is a computer-based battery designed to assess different components of working memory in young school-age children. Working memory deficits have been identified in children with language-based learning disabilities, including dyslexia [1,2] and language impairment [3,4], but it is not clear whether these children exhibit deficits in subcomponents of working memory, such as visuospatial or phonological working memory. The C...

  12. A highly efficient parallel algorithm for solving the neutron diffusion nodal equations on shared-memory computers

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    Modern parallel computer architectures offer an enormous potential for reducing CPU and wall-clock execution times of large-scale computations commonly performed in various applications in science and engineering. Recently, several authors have reported their efforts in developing and implementing parallel algorithms for solving the neutron diffusion equation on a variety of shared- and distributed-memory parallel computers. Testing of these algorithms for a variety of two- and three-dimensional meshes showed significant speedup of the computation. Even for very large problems (i.e., three-dimensional fine meshes) executed concurrently on a few nodes in serial (nonvector) mode, however, the measured computational efficiency is very low (40 to 86%). In this paper, the authors present a highly efficient (∼85 to 99.9%) algorithm for solving the two-dimensional nodal diffusion equations on the Sequent Balance 8000 parallel computer. Also presented is a model for the performance, represented by the efficiency, as a function of problem size and the number of participating processors. The model is validated through several tests and then extrapolated to larger problems and more processors to predict the performance of the algorithm in more computationally demanding situations

  13. The effects of working memory on brain-computer interface performance.

    Science.gov (United States)

    Sprague, Samantha A; McBee, Matthew T; Sellers, Eric W

    2016-02-01

    The purpose of the present study is to evaluate the relationship between working memory and BCI performance. Participants took part in two separate sessions. The first session consisted of three computerized tasks. The List Sorting Working Memory Task was used to measure working memory, the Picture Vocabulary Test was used to measure general intelligence, and the Dimensional Change Card Sort Test was used to measure executive function, specifically cognitive flexibility. The second session consisted of a P300-based BCI copy-spelling task. The results indicate that both working memory and general intelligence are significant predictors of BCI performance. This suggests that working memory training could be used to improve performance on a BCI task. Working memory training may help to reduce a portion of the individual differences that exist in BCI performance allowing for a wider range of users to successfully operate the BCI system as well as increase the BCI performance of current users. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  14. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  15. Distributed-Memory Fast Maximal Independent Set

    Energy Technology Data Exchange (ETDEWEB)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-09-13

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All those algorithms are designed focusing on shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
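
    For readers unfamiliar with Luby's approach, here is a minimal single-machine Python sketch in the spirit of "Luby(A)"; the paper's contribution is the distributed-memory execution, which this sketch does not attempt.

    ```python
    import random

    def luby_mis(adj):
        """Randomized MIS in the spirit of Luby(A): each round, every active
        vertex draws a random value; local minima join the MIS, then they and
        their neighbours are deactivated. `adj` maps vertex -> set of vertices."""
        active = set(adj)
        mis = set()
        while active:
            r = {v: random.random() for v in active}
            winners = {v for v in active
                       if all(r[v] < r[u] for u in adj[v] if u in active)}
            mis |= winners
            active -= winners | {u for v in winners for u in adj[v]}
        return mis

    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(luby_mis(adj))   # e.g. {1, 3}: no two chosen vertices are adjacent
    ```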

  16. Read-only-memory-based quantum computation: Experimental explorations using nuclear magnetic resonance and future prospects

    International Nuclear Information System (INIS)

    Sypher, D.R.; Brereton, I.M.; Wiseman, H.M.; Hollis, B.L.; Travaglione, B.C.

    2002-01-01

    Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less 'magical', and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature 'real-world' problem relating to the lengths of paths in a network

  17. Time-Predictable Virtual Memory

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2016-01-01

    Virtual memory is an important feature of modern computer architectures. For hard real-time systems, memory protection is a particularly interesting feature of virtual memory. However, current memory management units are not designed for time-predictability and therefore cannot be used...... in such systems. This paper investigates the requirements on virtual memory from the perspective of hard real-time systems and presents the design of a time-predictable memory management unit. Our evaluation shows that the proposed design can be implemented efficiently. The design allows address translation...... and address range checking in constant time of two clock cycles on a cache miss. This constant time is in strong contrast to the possible cost of a miss in a translation look-aside buffer in traditional virtual memory organizations. Compared to a platform without a memory management unit, these two additional...

  18. Playing the computer game Tetris prior to viewing traumatic film material and subsequent intrusive memories: Examining proactive interference.

    Science.gov (United States)

    James, Ella L; Lau-Zhu, Alex; Tickle, Hannah; Horsch, Antje; Holmes, Emily A

    2016-12-01

    Visuospatial working memory (WM) tasks performed concurrently or after an experimental trauma (traumatic film viewing) have been shown to reduce subsequent intrusive memories (concurrent or retroactive interference, respectively). This effect is thought to arise because, during the time window of memory consolidation, the film memory is labile and vulnerable to interference by the WM task. However, it is not known whether tasks before an experimental trauma (i.e. proactive interference) would also be effective. Therefore, we tested if a visuospatial WM task given before a traumatic film reduced intrusions. Findings are relevant to the development of preventative strategies to reduce intrusive memories of trauma for groups who are routinely exposed to trauma (e.g. emergency services personnel) and for whom tasks prior to trauma exposure might be beneficial. Participants were randomly assigned to 1 of 2 conditions. In the Tetris condition (n = 28), participants engaged in the computer game for 11 min immediately before viewing a 12-min traumatic film, whereas those in the Control condition (n = 28) had no task during this period. Intrusive memory frequency was assessed using an intrusion diary over 1 week and an Intrusion Provocation Task at 1-week follow-up. Recognition memory for the film was also assessed at 1 week. Compared to the Control condition, participants in the Tetris condition did not report a statistically significant difference in intrusive memories of the trauma film on either measure. There was also no statistically significant difference in recognition memory scores between conditions. The study used an experimental trauma paradigm and findings may not be generalizable to a clinical population. Compared to control, playing Tetris before viewing a trauma film did not lead to a statistically significant reduction in the frequency of later intrusive memories of the film. It is unlikely that proactive interference, at least with this task

  19. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper working. This is a very wide problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose to use an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The obtained analytical results are related to a practical experiment showing interesting and valuable results.

  20. Computational Thermodynamics and Kinetics-Based ICME Framework for High-Temperature Shape Memory Alloys

    Science.gov (United States)

    Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim

    2015-11-01

    Over the last decade, considerable interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industry demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs and limitations to the full implementation of ICME frameworks for HTSMA development are presented.

  1. Projected phase-change memory devices.

    Science.gov (United States)

    Koelmans, Wabe W; Sebastian, Abu; Jonnalagadda, Vara Prasad; Krebs, Daniel; Dellmann, Laurent; Eleftheriou, Evangelos

    2015-09-03

    Nanoscale memory devices, whose resistance depends on the history of the electric signals applied, could become critical building blocks in new computing paradigms, such as brain-inspired computing and memcomputing. However, there are key challenges to overcome, such as the high programming power required, noise and resistance drift. Here, to address these, we present the concept of a projected memory device, whose distinguishing feature is that the physical mechanism of resistance storage is decoupled from the information-retrieval process. We designed and fabricated projected memory devices based on the phase-change storage mechanism and convincingly demonstrate the concept through detailed experimentation, supported by extensive modelling and finite-element simulations. The projected memory devices exhibit remarkably low drift and excellent noise performance. We also demonstrate active control and customization of the programming characteristics of the device that reliably realize a multitude of resistance states.

  2. Quantum computers: Definition and implementations

    International Nuclear Information System (INIS)

    Perez-Delgado, Carlos A.; Kok, Pieter

    2011-01-01

    The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. Therefore, the question is what are the general criteria for implementing quantum computers. To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: Any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.

  3. Ring interconnection for distributed memory automation and computing system

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, V I [Inst. for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation)

    1996-12-31

    Problems of development of measurement, acquisition and control systems based on a distributed memory and a ring interface are discussed. It has been found that the RamLink-type protocol can be used for ringlet links in a non-symmetrical distributed-memory multiprocessor architecture. 5 refs.

  4. A memory module for experimental data handling

    Science.gov (United States)

    De Blois, J.

    1985-02-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need of direct memory access in computer controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment-, a list- and a fifo mode. This is achieved by connecting the main parts, being: the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory by an internal connector. The memory unit will be used in Mössbauer experiments and in thermal neutron scattering experiments.

  5. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.

  6. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  7. Overview of emerging nonvolatile memory technologies.

    Science.gov (United States)

    Meena, Jagan Singh; Sze, Simon Min; Chand, Umesh; Tseng, Tseung-Yuen

    2014-01-01

    Nonvolatile memory technologies in Si-based electronics date back to the 1990s. Ferroelectric field-effect transistor (FeFET) was one of the most promising devices replacing the conventional Flash memory facing physical scaling limitations at those times. A variant of charge storage memory referred to as Flash memory is widely used in consumer electronic products such as cell phones and music players while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. Emerging memory technologies promise new memories to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets including digital cameras, cell phones and portable music players. They are being investigated and lead to the future as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory and so become very attractive as another possibility for future memory hierarchies. Many other new classes of emerging memory technologies such as transparent and plastic, three-dimensional (3-D), and quantum dot memory technologies have also gained tremendous popularity in recent years. Subsequently, it is not an exaggeration to say that computer memory could soon earn the ultimate commercial validation for commercial scale-up and production: the cheap plastic knockoff. Therefore, this review is devoted to the rapidly developing new

  8. Overview of emerging nonvolatile memory technologies

    Science.gov (United States)

    2014-01-01

    Nonvolatile memory technologies in Si-based electronics date back to the 1990s. Ferroelectric field-effect transistor (FeFET) was one of the most promising devices replacing the conventional Flash memory facing physical scaling limitations at those times. A variant of charge storage memory referred to as Flash memory is widely used in consumer electronic products such as cell phones and music players while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. Emerging memory technologies promise new memories to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets including digital cameras, cell phones and portable music players. They are being investigated and lead to the future as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory and so become very attractive as another possibility for future memory hierarchies. Many other new classes of emerging memory technologies such as transparent and plastic, three-dimensional (3-D), and quantum dot memory technologies have also gained tremendous popularity in recent years. Subsequently, it is not an exaggeration to say that computer memory could soon earn the ultimate commercial validation for commercial scale-up and production: the cheap plastic knockoff. Therefore, this review is devoted to the rapidly developing new

  9. Memory-efficient analysis of dense functional connectomes

    Directory of Open Access Journals (Sweden)

    Kristian Loewe

    2016-11-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency in terms of memory requirements and computation time. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling
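
    A minimal Python sketch of the on-demand idea (not the authors' Matlab implementation): only the time series are held in memory, and any connectivity entry is computed when indexed.

    ```python
    import numpy as np

    class OnDemandConnectome:
        """Sketch of an on-demand connectivity matrix: only the (voxels x time)
        series are kept in memory; a correlation entry is computed when asked,
        so memory stays O(voxels * timepoints) instead of O(voxels^2)."""
        def __init__(self, series: np.ndarray):
            z = series - series.mean(axis=1, keepdims=True)
            self._z = z / np.linalg.norm(z, axis=1, keepdims=True)

        def __getitem__(self, idx):
            i, j = idx
            return float(self._z[i] @ self._z[j])  # Pearson r of voxels i, j

    ts = np.random.default_rng(1).standard_normal((10_000, 200))
    conn = OnDemandConnectome(ts)
    print(conn[3, 7])    # one entry; the 10k x 10k matrix is never stored
    ```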

  10. Quantum memories: emerging applications and recent advances

    Science.gov (United States)

    Heshami, Khabat; England, Duncan G.; Humphreys, Peter C.; Bustard, Philip J.; Acosta, Victor M.; Nunn, Joshua; Sussman, Benjamin J.

    2016-01-01

    Quantum light–matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories. PMID:27695198

  11. Towards realising high-speed large-bandwidth quantum memory

    Institute of Scientific and Technical Information of China (English)

    SHI BaoSen; DING DongSheng

    2016-01-01

    Indispensable for quantum communication and quantum computation, quantum memory executes on-demand storage and retrieval of quantum states such as those of a single photon, an entangled pair or squeezed states. Among the various forms of quantum memory, Raman quantum memory has advantages for its broadband and high-speed characteristics, which results in a huge potential for applications in quantum networks and quantum computation. However, realising Raman quantum memory with true single photons and photonic entanglement is challenging. In this review, after briefly introducing the main benchmarks in the development of quantum memory and describing the state of the art, we focus on our recent experimental progress in quantum memory storage of quantum states using the Raman scheme.

  12. Self-Testing Static Random-Access Memory

    Science.gov (United States)

    Chau, Savio; Rennels, David

    1991-01-01

    Proposed static random-access memory for computer features improved error-detecting and -correcting capabilities. New self-testing scheme provides for detection and correction of errors at any time during normal operation - even while data being written into memory. Faults in equipment causing errors in output data detected by repeatedly testing every memory cell to determine whether it can still store both "one" and "zero", without destroying data stored in memory.
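
    A toy Python sketch of the non-destructive test idea, with hypothetical names; the proposed hardware runs such checks concurrently with normal operation, which this sequential sketch does not model.

    ```python
    def test_cell(ram: list, addr: int) -> bool:
        """Sketch of a non-destructive cell test: the stored word is saved,
        the cell is exercised with all-zeros and all-ones patterns, and the
        original contents are restored afterwards."""
        saved = ram[addr]
        ok = True
        for pattern in (0x00, 0xFF):
            ram[addr] = pattern
            ok &= (ram[addr] == pattern)   # a stuck bit would fail here
        ram[addr] = saved                  # stored data survives the test
        return ok

    ram = [0x5A] * 1024
    assert all(test_cell(ram, a) for a in range(len(ram)))
    assert ram[0] == 0x5A                  # contents unchanged
    ```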

  13. Insect olfactory coding and memory at multiple timescales.

    Science.gov (United States)

    Gupta, Nitin; Stopfer, Mark

    2011-10-01

    Insects can learn, allowing them great flexibility for locating seasonal food sources and avoiding wily predators. Because insects are relatively simple and accessible to manipulation, they provide good experimental preparations for exploring mechanisms underlying sensory coding and memory. Here we review how the intertwining of memory with computation enables the coding, decoding, and storage of sensory experience at various stages of the insect olfactory system. Individual parts of this system are capable of multiplexing memories at different timescales, and conversely, memory on a given timescale can be distributed across different parts of the circuit. Our sampling of the olfactory system emphasizes the diversity of memories, and the importance of understanding these memories in the context of computations performed by different parts of a sensory system. Published by Elsevier Ltd.

  14. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map a history to each processor dynamically and to map the control process to a certain processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)

  15. A memory module for experimental data handling

    International Nuclear Information System (INIS)

    Blois, J. de

    1985-01-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need of direct memory access in computer controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment-, a list- and a fifo mode. This is achieved by connecting the main parts, being: the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory by an internal connector. The memory unit will be used in Moessbauer experiments and in thermal neutron scattering experiments. (orig.)

  16. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
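
    A rough Python illustration of the underlying principle (names and constants are hypothetical): chaotic maps amplify any tiny corruption into a large trajectory mismatch, so comparing replicated trajectories exposes faulty components.

    ```python
    def logistic_trajectory(x0: float, steps: int) -> float:
        """Iterate the chaotic logistic map x -> 4x(1-x)."""
        x = x0
        for _ in range(steps):
            x = 4.0 * x * (1.0 - x)
        return x

    def check_node(node_result: float, x0: float = 0.123456,
                   steps: int = 1000, tol: float = 1e-12) -> bool:
        """Compare a node's reported trajectory endpoint against a reference;
        chaos turns any arithmetic or memory fault into a large mismatch."""
        return abs(node_result - logistic_trajectory(x0, steps)) < tol

    healthy = logistic_trajectory(0.123456, 1000)
    faulty = logistic_trajectory(0.123456 + 1e-15, 1000)  # one-ulp corruption
    print(check_node(healthy), check_node(faulty))         # True False
    ```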

  17. FPGA Based Intelligent Co-operative Processor in Memory Architecture

    DEFF Research Database (Denmark)

    Ahmed, Zaki; Sotudeh, Reza; Hussain, Dil Muhammad Akbar

    2011-01-01

    benefits of PIM, a concept of Co-operative Intelligent Memory (CIM) was developed by the intelligent system group of University of Hertfordshire, based on the previously developed Co-operative Pseudo Intelligent Memory (CPIM). This paper provides an overview on previous works (CPIM, CIM) and realization......In a continuing effort to improve computer system performance, Processor-In-Memory (PIM) architecture has emerged as an alternative solution. PIM architecture incorporates computational units and control logic directly on the memory to provide immediate access to the data. To exploit the potential...

  18. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.; Collier, Nathan; Pardo, David; Paszyński, Maciej R.

    2011-01-01

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  19. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.

    2011-05-14

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  20. A Comparison of Two Paradigms for Distributed Shared Memory

    NARCIS (Netherlands)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.; Tanenbaum, A.S.

    1992-01-01

    Two paradigms for distributed shared memory on loosely‐coupled computing systems are compared: the shared data‐object model as used in Orca, a programming language specially designed for loosely‐coupled computing systems, and the shared virtual memory model. For both paradigms two systems are

  1. Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute

    Directory of Open Access Journals (Sweden)

    Dang Hung

    2017-07-01

    We consider privacy-preserving computation of big data using trusted computing primitives with limited private memory. Simply ensuring that the data remains encrypted outside the trusted computing environment is insufficient to preserve data privacy, for data movement observed during computation could leak information. While it is possible to thwart such leakage using a generic solution such as ORAM [42], designing efficient privacy-preserving algorithms is challenging. Besides computation efficiency, it is critical to keep trusted code bases lean, for large ones are unwieldy to vet and verify. In this paper, we advocate a simple approach wherein many basic algorithms (e.g., sorting) can be made privacy-preserving by adding a step that securely scrambles the data before feeding it to the original algorithms. We call this approach Scramble-then-Compute (StC), and give a sufficient condition whereby existing external memory algorithms can be made privacy-preserving via StC. This approach facilitates code-reuse, and its simplicity contributes to a smaller trusted code base. It is also general, allowing algorithm designers to leverage an extensive body of known efficient algorithms for better performance. Our experiments show that StC could offer up to 4.1× speedups over known, application-specific alternatives.
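
    A minimal Python sketch of the StC idea, assuming a sort as the reused algorithm: a secret permutation is applied before the unmodified computation, decoupling observable access patterns from the input order.

    ```python
    import secrets

    def scramble_then_sort(data: list) -> list:
        """Sketch of Scramble-then-Compute: apply a secret random permutation
        before running the original (non-oblivious) algorithm, so the memory
        traffic observed afterwards is independent of the input order."""
        scrambled = data[:]
        for i in range(len(scrambled) - 1, 0, -1):   # Fisher-Yates shuffle
            j = secrets.randbelow(i + 1)             # secret, uniform choice
            scrambled[i], scrambled[j] = scrambled[j], scrambled[i]
        return sorted(scrambled)                     # reuse the existing code

    print(scramble_then_sort([5, 3, 9, 1]))          # [1, 3, 5, 9]
    ```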

  2. The Case for Higher Computational Density in the Memory-Bound FDTD Method within Multicore Environments

    Directory of Open Access Journals (Sweden)

    Mohammed F. Hadi

    2012-01-01

    It is argued here that more accurate though more compute-intensive alternate algorithms to certain computational methods which are deemed too inefficient and wasteful when implemented within serial codes can be more efficient and cost-effective when implemented in parallel codes designed to run on today's multicore and many-core environments. This argument is most germane to methods that involve large data sets with relatively limited computational density—in other words, algorithms with small ratios of floating point operations to memory accesses. The examples chosen here to support this argument represent a variety of high-order finite-difference time-domain algorithms. It will be demonstrated that a three- to eightfold increase in floating-point operations due to higher-order finite-differences will translate to only two- to threefold increases in actual run times using either graphical or central processing units of today. It is hoped that this argument will convince researchers to revisit certain numerical techniques that have long been shelved and reevaluate them for multicore usability.
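
    To make the arithmetic-density argument concrete, here is an illustrative Python comparison (not from the paper) of second- and fourth-order second-derivative stencils: the higher-order stencil spends more flops per point on data that is largely already in cache.

    ```python
    import numpy as np

    def d2_2nd(u, dx):
        """2nd-order second derivative: 3-point stencil, ~4 flops per point."""
        return (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2

    def d2_4th(u, dx):
        """4th-order second derivative: 5-point stencil, ~8 flops per point,
        yet the extra flops mostly reuse values already fetched from memory,
        raising the ratio of floating point operations to memory accesses."""
        return (-u[4:] + 16 * u[3:-1] - 30 * u[2:-2]
                + 16 * u[1:-3] - u[:-4]) / (12 * dx**2)

    x = np.linspace(0, 2 * np.pi, 1001)
    dx = x[1] - x[0]
    u = np.sin(x)                                  # exact second derivative: -sin
    err2 = np.max(np.abs(d2_2nd(u, dx)[1:-1] + np.sin(x[2:-2])))
    err4 = np.max(np.abs(d2_4th(u, dx) + np.sin(x[2:-2])))
    print(err2, err4)   # far smaller error for nearly the same memory traffic
    ```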

  3. Memory-Efficient Analysis of Dense Functional Connectomes.

    Science.gov (United States)

    Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to

  4. A Time-predictable Memory Network-on-Chip

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Chong, David VH; Puffitsch, Wolfgang

    2014-01-01

    To derive safe bounds on worst-case execution times (WCETs), all components of a computer system need to be time-predictable: the processor pipeline, the caches, the memory controller, and memory arbitration on a multicore processor. This paper presents a solution for time-predictable memory...... arbitration and access for chip-multiprocessors. The memory network-on-chip is organized as a tree with time-division multiplexing (TDM) of accesses to the shared memory. The TDM based arbitration completely decouples processor cores and allows WCET analysis of the memory accesses on individual cores without...
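
    A toy Python model of TDM arbitration (parameters hypothetical): with one slot per core per round, the worst-case wait before a memory access is issued is bounded by n_cores - 1 cycles, which is what makes per-core WCET analysis possible.

    ```python
    def tdm_wait(core: int, t: int, n_cores: int) -> int:
        """Cycles core `core` waits from time `t` until its next TDM slot.
        With one slot per core per round, the wait never exceeds n_cores - 1,
        independently of what the other cores are doing."""
        return (core - t) % n_cores

    N_CORES = 4
    print([tdm_wait(2, t, N_CORES) for t in range(8)])
    # [2, 1, 0, 3, 2, 1, 0, 3] -- bounded, periodic, analysable offline
    ```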

  5. Forms of memory: Investigating the computational basis of semantic-episodic memory interactions

    NARCIS (Netherlands)

    Neville, D.A.

    2015-01-01

    The present thesis investigated how the memory systems related to the processing of semantic and episodic information combine to generate behavioural performance as measured in standard laboratory tasks. Across a series of behavioural experiments I looked at different types of interactions between

  6. System and method for programmable bank selection for banked memory subsystems

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton on Hudson, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hoenicke, Dirk (Seebruck-Seeon, DE); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY)

    2010-09-07

    A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each of the respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment memory storage access distributed across the one or more memory storage structures.
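
    A loose software analogue of the claimed mechanism (the bit positions and bank mapping below are invented for illustration): selected physical-address bits are extracted and mapped to a memory-storage-structure select signal.

    ```python
    def make_bank_selector(bit_positions, match_values):
        """Sketch of the programmable first logic device: extract the
        configured physical-address bits, then map each programmed value to
        a bank number (the role of the second logic device).
        `bit_positions` and `match_values` stand in for programmable state."""
        def select(addr: int) -> int:
            bits = 0
            for k, pos in enumerate(bit_positions):
                bits |= ((addr >> pos) & 1) << k
            return match_values[bits]
        return select

    # Hypothetical configuration: address bits 12 and 20 pick one of four
    # banks, interleaving accesses across the memory storage structures.
    select = make_bank_selector([12, 20], {0: 0, 1: 1, 2: 2, 3: 3})
    print(select(0x0010_1000))   # bits 12 and 20 both set -> bank 3
    ```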

  7. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    Science.gov (United States)

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data account for the so-called "effortful hypothesis", where distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. A memory efficient user interface for CLIPS micro-computer applications

    Science.gov (United States)

    Sterle, Mark E.; Mayer, Richard J.; Jordan, Janice A.; Brodale, Howard N.; Lin, Min-Jin

    1990-01-01

    The goal of the Integrated Southern Pine Beetle Expert System (ISPBEX) is to provide expert level knowledge concerning treatment advice that is convenient and easy to use for Forest Service personnel. ISPBEX was developed in CLIPS and delivered on an IBM PC AT class micro-computer, operating with an MS/DOS operating system. This restricted the size of the run time system to 640K. In order to provide a robust expert system, with on-line explanation, help, and alternative actions menus, as well as features that allow the user to back up or execute 'what if' scenarios, a memory efficient menuing system was developed to interface with the CLIPS programs. By robust, we mean an expert system that (1) is user friendly, (2) provides reasonable solutions for a wide variety of domain specific problems, (3) explains why some solutions were suggested but others were not, and (4) provides technical information relating to the problem solution. Several advantages were gained by using this type of user interface (UI). First, by storing the menus on the hard disk (instead of main memory) during program execution, a more robust system could be implemented. Second, since the menus were built rapidly, development time was reduced. Third, the user may try a new scenario by backing up to any of the input screens and revising segments of the original input without having to retype all the information. And fourth, asserting facts from the menus provided for a dynamic and flexible fact base. This UI technology has been applied successfully in expert systems applications in forest management, agriculture, and manufacturing. This paper discusses the architecture of the UI system, human factors considerations, and the menu syntax design.

  9. The gravitational-wave memory effect

    International Nuclear Information System (INIS)

    Favata, Marc

    2010-01-01

    The nonlinear memory effect is a slowly growing, non-oscillatory contribution to the gravitational-wave amplitude. It originates from gravitational waves that are sourced by the previously emitted waves. In an ideal gravitational-wave interferometer a gravitational wave with memory causes a permanent displacement of the test masses that persists after the wave has passed. Surprisingly, the nonlinear memory affects the signal amplitude starting at leading (Newtonian-quadrupole) order. Despite this fact, the nonlinear memory is not easily extracted from current numerical relativity simulations. After reviewing the linear and nonlinear memory I summarize some recent work, including (1) computations of the memory contribution to the inspiral waveform amplitude (thus completing the waveform to third post-Newtonian order); (2) the first calculations of the nonlinear memory that include all phases of binary black hole coalescence (inspiral, merger, ringdown); and (3) realistic estimates of the detectability of the memory with LISA.

  10. Linking Working Memory and Long-Term Memory: A Computational Model of the Learning of New Words

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2007-01-01

    The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory…

  11. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    Science.gov (United States)

    Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2017-10-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

  12. Cognitive cooperation groups mediated by computers and internet present significant improvement of cognitive status in older adults with memory complaints: a controlled prospective study

    Directory of Open Access Journals (Sweden)

    Rodrigo de Rosso Krug

    Objective: To estimate the effect of participating in cognitive cooperation groups, mediated by computers and the internet, on the Mini-Mental State Examination (MMSE) percent variation of outpatients with memory complaints attending two memory clinics. Methods: A prospective controlled intervention study carried out from 2006 to 2013 with 293 elders. The intervention group (n = 160) attended a cognitive cooperation group (20 sessions of 1.5 hours each). The control group (n = 133) received routine medical care. Outcome was the percent variation in the MMSE. Control variables included gender, age, marital status, schooling, hypertension, diabetes, dyslipidaemia, hypothyroidism, depression, vascular diseases, polymedication, use of benzodiazepines, exposure to tobacco, sedentary lifestyle, obesity and functional capacity. The final model was obtained by multivariate linear regression. Results: The intervention group obtained an independent positive variation of 24.39% (95% CI = 14.86/33.91) in the MMSE compared to the control group. Conclusion: The results suggested that cognitive cooperation groups, mediated by computers and the internet, are associated with cognitive status improvement of older adults in memory clinics.

  13. In-Depth Analysis of Computer Memory Acquisition Software for Forensic Purposes.

    Science.gov (United States)

    McDown, Robert J; Varol, Cihan; Carvajal, Leonardo; Chen, Lei

    2016-01-01

    The comparison studies on random access memory (RAM) acquisition tools are either limited in metrics or the selected tools were designed to be executed in older operating systems. Therefore, this study evaluates seven widely used shareware or freeware/open source RAM acquisition forensic tools that are compatible with the latest 64-bit Windows operating systems. These tools' user interface capabilities, platform limitations, reporting capabilities, total execution time, shared and proprietary DLLs, modified registry keys, and invoked files during processing were compared. We observed that Windows Memory Reader and Belkasoft's Live Ram Capturer leave the fewest fingerprints in memory when loaded. On the other hand, ProDiscover and FTK Imager perform poorly in memory usage, processing time, DLL usage, and unwanted artifacts introduced to the system. While Belkasoft's Live Ram Capturer is the fastest to obtain an image of the memory, ProDiscover takes the longest time to do the same job. © 2015 American Academy of Forensic Sciences.

  14. Hypergraph-Based Recognition Memory Model for Lifelong Experience

    Science.gov (United States)

    2014-01-01

    Cognitive agents are expected to interact with and adapt to a nonstationary dynamic environment. As an initial process of decision making in a real-world agent interaction, familiarity judgment leads the following processes for intelligence. Familiarity judgment includes knowing previously encoded data as well as completing original patterns from partial information, which are fundamental functions of recognition memory. Although previous computational memory models have attempted to reflect human behavioral properties on the recognition memory, they have been focused on static conditions without considering temporal changes in terms of lifelong learning. To provide temporal adaptability to an agent, in this paper, we suggest a computational model for recognition memory that enables lifelong learning. The proposed model is based on a hypergraph structure, and thus it allows a high-order relationship between contextual nodes and enables incremental learning. Through a simulated experiment, we investigate the optimal conditions of the memory model and validate the consistency of memory performance for lifelong learning. PMID:25371665

  15. Malware Memory Analysis of the IVYL Linux Rootkit: Investigating a Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2015-04-01

    report is to examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills... The skills amassed by incident handlers and investigators alike while using Volatility to examine Windows memory images will be of some help...

  16. Multithreaded Asynchronous Graph Traversal for In-Memory and Semi-External Memory

    KAUST Repository

    Pearce, Roger

    2010-11-01

    Processing large graphs is becoming increasingly important for many domains such as social networks, bioinformatics, etc. Unfortunately, many algorithms and implementations do not scale with increasing graph sizes. As a result, researchers have attempted to meet the growing data demands using parallel and external memory techniques. We present a novel asynchronous approach to compute Breadth-First-Search (BFS), Single-Source-Shortest-Paths, and Connected Components for large graphs in shared memory. Our highly parallel asynchronous approach hides data latency due to both poor locality and delays in the underlying graph data storage. We present an experimental study applying our technique to both In-Memory and Semi-External Memory graphs utilizing multi-core processors and solid-state memory devices. Our experiments using synthetic and real-world datasets show that our asynchronous approach is able to overcome data latencies and provide significant speedup over alternative approaches. For example, on billion vertex graphs our asynchronous BFS scales up to 14x on 16 cores. © 2010 IEEE.
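
    A minimal sequential Python sketch of the asynchronous label-correcting idea behind such a BFS (the paper's implementation is multithreaded and latency-hiding, which this sketch does not model):

    ```python
    from collections import deque

    def async_bfs(adj, src):
        """Label-correcting BFS: vertices are taken from a work queue with no
        level-by-level barrier; a vertex is re-queued whenever a smaller level
        reaches it, so late or out-of-order updates still converge."""
        level = {v: float("inf") for v in adj}
        level[src] = 0
        queue = deque([src])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if level[v] + 1 < level[u]:   # correct the label, no barrier
                    level[u] = level[v] + 1
                    queue.append(u)
        return level

    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(async_bfs(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2}
    ```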

  17. Parallel-vector algorithms for particle simulations on shared-memory multiprocessors

    International Nuclear Information System (INIS)

    Nishiura, Daisuke; Sakaguchi, Hide

    2011-01-01

    Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles freely move within a given space, and so on a distributed-memory system, load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. However, shared-memory systems achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by a cell label in the domain to which the particles belong. Then, a list of contact candidates is constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, scalar supercomputer, vector supercomputer, and graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
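
    An illustrative Python sketch of the pre-conditioning step (simplified to 2-D, serial, and without the force-summation table): particles are binned by cell label and contact candidates are paired only within neighbouring cells.

    ```python
    import numpy as np

    def contact_candidates(pos: np.ndarray, cell: float):
        """Sketch of the pre-conditioning step: particle indices are grouped
        by the label of the cell they fall in, then candidate pairs are
        gathered from each cell and its neighbours, avoiding an all-pairs
        sweep over every particle combination."""
        keys = np.floor(pos / cell).astype(int)
        cells = {}
        for i, k in enumerate(map(tuple, keys)):
            cells.setdefault(k, []).append(i)
        pairs = []
        for (cx, cy), members in cells.items():
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), []):
                        pairs.extend((i, j) for i in members if i < j)
        return pairs

    pos = np.random.default_rng(2).random((100, 2))
    print(len(contact_candidates(pos, cell=0.1)))  # far fewer than 100*99/2
    ```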

  18. Active non-volatile memory post-processing

    Energy Technology Data Exchange (ETDEWEB)

    Kannan, Sudarsun; Milojicic, Dejan S.; Talwar, Vanish

    2017-04-11

    A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.

  19. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    Science.gov (United States)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process the image data stored in the RRAM arrays. The proposed image storage architecture shows performances of better speed-device consumption efficiency compared with the previous kernel storage architecture. Further we improve the architecture for a high accuracy and low power computing by utilizing the binary storage and the series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performances including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) more than 67 times speed boost; 3) 71.4% energy saving.
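
    The crossbar's role can be seen in a digital equivalent: convolution reduces to dot products between unrolled image patches and kernel vectors, which the RRAM array evaluates in analog as multiply-accumulates. The numpy sketch below is illustrative only; the sizes match the 28 × 28 binary image and ten 3 × 3 kernels quoted above.

      # Digital equivalent of the crossbar computation: convolution as
      # dot products of unrolled patches with kernel vectors.
      import numpy as np

      def im2col(img, k):
          h, w = img.shape
          cols = [img[r:r + k, c:c + k].ravel()
                  for r in range(h - k + 1) for c in range(w - k + 1)]
          return np.array(cols)                  # one row per patch

      img = np.random.randint(0, 2, (28, 28))    # binary image
      kernels = np.random.randint(0, 2, (10, 3, 3))

      patches = im2col(img, 3)                   # (26*26, 9)
      weights = kernels.reshape(10, -1).T        # (9, 10): one column per kernel
      feature_maps = (patches @ weights).reshape(26, 26, 10)
      print(feature_maps.shape)                  # (26, 26, 10)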

  20. Extended memory management under RTOS

    Science.gov (United States)

    Plummer, M.

    1981-01-01

    A technique for extended memory management in ROLM 1666 computers using FORTRAN is presented. A general software system is described to which the technique is ideally suited. The memory manager's interface with the system is described. The protocols by which the manager is invoked are presented, as well as the methods used by the manager.

  1. NONLINEAR GRAVITATIONAL-WAVE MEMORY FROM BINARY BLACK HOLE MERGERS

    International Nuclear Information System (INIS)

    Favata, Marc

    2009-01-01

    Some astrophysical sources of gravitational waves can produce a 'memory effect', which causes a permanent displacement of the test masses in a freely falling gravitational-wave detector. The Christodoulou memory is a particularly interesting nonlinear form of memory that arises from the gravitational-wave stress-energy tensor's contribution to the distant gravitational-wave field. This nonlinear memory contributes a nonoscillatory component to the gravitational-wave signal at leading (Newtonian-quadrupole) order in the waveform amplitude. Previous computations of the memory and its detectability considered only the inspiral phase of binary black hole coalescence. Using an 'effective-one-body' (EOB) approach calibrated to numerical relativity simulations, as well as a simple fully analytic model, the Christodoulou memory is computed for the inspiral, merger, and ringdown. The memory will be very difficult to detect with ground-based interferometers, but is likely to be observable in supermassive black hole mergers with LISA out to redshifts z ≲ 2. Detection of the nonlinear memory could serve as an experimental test of the ability of gravity to 'gravitate'.

  2. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    International Nuclear Information System (INIS)

    Garner, Andrew J P; Thompson, Jayne; Vedral, Vlatko; Gu, Mile; Liu, Qing

    2017-01-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart. (paper)

  3. Modeling reconsolidation in kernel associative memory.

    Directory of Open Access Journals (Sweden)

    Dimitri Nowicki

    Memory reconsolidation is a central process enabling adaptive memory and the perception of a constantly changing reality. It causes memories to be strengthened, weakened or changed following their recall. A computational model of memory reconsolidation is presented. Unlike Hopfield-type memory models, our model introduces an unbounded number of attractors that are updatable and can process real-valued, large, realistic stimuli. Our model replicates three characteristic effects of the reconsolidation process on human memory: increased association, extinction of fear memories, and the ability to track and follow gradually changing objects. In addition to this behavioral validation, a continuous time version of the reconsolidation model is introduced. This version extends average rate dynamic models of brain circuits exhibiting persistent activity to include adaptivity and an unbounded number of attractors.

  4. Super-activating Quantum Memory with Entanglement

    OpenAIRE

    Guan, Ji; Feng, Yuan; Ying, Mingsheng

    2017-01-01

    Noiseless subsystems were proved to be an efficient and faithful approach to preserve fragile information against decoherence in quantum information processing and quantum computation. They were employed to design a general (hybrid) quantum memory cell model that can store both quantum and classical information. In this Letter, we find an interesting new phenomenon that the purely classical memory cell can be super-activated to preserve quantum states, whereas the null memory cell can only be...

  5. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
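
    The decomposition behind the patent can be illustrated on a single node, with a plain transpose standing in for the network "all-to-all" (an assumption made here; the patent randomizes the exchange order for network efficiency):

      # A 2D FFT as: row-wise 1D FFTs, a redistribution (here a simple
      # transpose, standing in for the all-to-all), then row-wise 1D
      # FFTs again. Single-node illustrative sketch.
      import numpy as np

      a = np.random.rand(8, 8)

      step1 = np.fft.fft(a, axis=1)      # 1D FFTs in the first dimension
      step2 = step1.T                    # re-distribution ("all-to-all")
      step3 = np.fft.fft(step2, axis=1)  # 1D FFTs in the second dimension

      assert np.allclose(step3.T, np.fft.fft2(a))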

  6. Deciphering Neural Codes of Memory during Sleep

    Science.gov (United States)

    Chen, Zhe; Wilson, Matthew A.

    2017-01-01

    Memories of experiences are stored in the cerebral cortex. Sleep is critical for consolidating hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms on memory consolidation, and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here, we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches for analyzing sleep-associated neural codes (SANC). We focus on two analysis paradigms for sleep-associated memory, and propose a new unsupervised learning framework (“memory first, meaning later”) for unbiased assessment of SANC. PMID:28390699

  7. Parallel statistical image reconstruction for cone-beam x-ray CT on a shared memory computation platform

    International Nuclear Information System (INIS)

    Kole, J S; Beekman, F J

    2005-01-01

    Statistical reconstruction methods offer possibilities of improving image quality as compared to analytical methods, but current reconstruction times prohibit routine clinical application. To reduce reconstruction times we have parallelized a statistical reconstruction algorithm for cone-beam x-ray CT, the ordered subset convex algorithm (OSC), and evaluated it on a shared memory computer. Two different parallelization strategies were developed: one that employs parallelism by computing the work for all projections within a subset in parallel, and one that divides the total volume into parts and processes the work for each sub-volume in parallel. Both methods are used to reconstruct a three-dimensional mathematical phantom on two different grid densities. The reconstructed images are binary identical to the result of the serial (non-parallelized) algorithm. The speed-up factor equals approximately 30 when using 32 to 40 processors, and scales almost linearly with the number of CPUs for both methods. The huge reduction in computation time allows us to apply statistical reconstruction to clinically relevant studies for the first time

  8. Sparse distributed memory overview

    Science.gov (United States)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered on studies of the memory itself and on the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.

  9. Noise reduction in optically controlled quantum memory

    Science.gov (United States)

    Ma, Lijun; Slattery, Oliver; Tang, Xiao

    2018-05-01

    Quantum memory is an essential tool for quantum communications systems and quantum computers. An important category of quantum memory, called optically controlled quantum memory, uses a strong classical beam to control the storage and re-emission of a single-photon signal through an atomic ensemble. In this type of memory, the residual light from the strong classical control beam can cause severe noise and degrade the system performance significantly. Efficiently suppressing this noise is a requirement for the successful implementation of optically controlled quantum memories. In this paper, we briefly introduce the latest and most common approaches to quantum memory and review the various noise-reduction techniques used in implementing them.

  10. EPS Mid-Career Award 2011. Are there multiple memory systems? Tests of models of implicit and explicit memory.

    Science.gov (United States)

    Shanks, David R; Berry, Christopher J

    2012-01-01

    This article reviews recent work aimed at developing a new framework, based on signal detection theory, for understanding the relationship between explicit (e.g., recognition) and implicit (e.g., priming) memory. Within this framework, different assumptions about sources of memorial evidence can be framed. Application to experimental results provides robust evidence for a single-system model in preference to multiple-systems models. This evidence comes from several sources including studies of the effects of amnesia and ageing on explicit and implicit memory. The framework allows a range of concepts in current memory research, such as familiarity, recollection, fluency, and source memory, to be linked to implicit memory. More generally, this work emphasizes the value of modern computational modelling techniques in the study of learning and memory.
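
    A toy simulation of the single-system idea, offered as a hedged illustration rather than the authors' fitted model: one latent memory-strength signal drives both recognition evidence and priming (faster identification), each read out with independent noise. All parameter values are invented.

      # One latent strength f drives both recognition and priming.
      # Parameters are illustrative, not fitted values.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      f_old = rng.normal(1.0, 1.0, n)   # strength of studied items
      f_new = rng.normal(0.0, 1.0, n)   # strength of unstudied items

      recog_old = f_old + rng.normal(0, 0.5, n)  # recognition evidence
      recog_new = f_new + rng.normal(0, 0.5, n)
      rt_old = 700 - 30 * f_old + rng.normal(0, 40, n)  # priming: faster RTs
      rt_new = 700 - 30 * f_new + rng.normal(0, 40, n)

      hits = (recog_old > 0.5).mean()
      fas = (recog_new > 0.5).mean()
      priming = rt_new.mean() - rt_old.mean()
      print(f"hits={hits:.2f} FAs={fas:.2f} priming={priming:.0f} ms")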

  11. Josephson Thermal Memory

    Science.gov (United States)

    Guarcello, Claudio; Solinas, Paolo; Braggio, Alessandro; Di Ventra, Massimiliano; Giazotto, Francesco

    2018-01-01

    We propose a superconducting thermal memory device that exploits the thermal hysteresis in a flux-controlled temperature-biased superconducting quantum-interference device (SQUID). This system reveals a flux-controllable temperature bistability, which can be used to define two well-distinguishable thermal logic states. We discuss a suitable writing-reading procedure for these memory states. The time of the memory writing operation is expected to be on the order of approximately 0.2 ns for a Nb-based SQUID in thermal contact with a phonon bath at 4.2 K. We suggest a noninvasive readout scheme for the memory states based on the measurement of the effective resonance frequency of a tank circuit inductively coupled to the SQUID. The proposed device paves the way for a practical implementation of thermal logic and computation. The advantage of this proposal is that it represents also an example of harvesting thermal energy in superconducting circuits.

  12. A computer vision-based automated Figure-8 maze for working memory test in rodents.

    Science.gov (United States)

    Pedigo, Samuel F; Song, Eun Young; Jung, Min Whan; Kim, Jeansok J

    2006-09-30

    The benchmark test for prefrontal cortex (PFC)-mediated working memory in rodents is a delayed alternation task utilizing variations of T-maze or Figure-8 maze, which requires the animals to make specific arm entry responses for reward. In this task, however, manual procedures involved in shaping target behavior, imposing delays between trials and delivering rewards can potentially influence the animal's performance on the maze. Here, we report an automated Figure-8 maze which does not necessitate experimenter-subject interaction during shaping, training or testing. This system incorporates a computer vision system for tracking, motorized gates to impose delays, and automated reward delivery. The maze is controlled by custom software that records the animal's location and activates the gates according to the animal's behavior and a control algorithm. The program performs calculations of task accuracy, tracks movement sequence through the maze, and provides other dependent variables (such as running speed, time spent in different maze locations, activity level during delay). Testing in rats indicates that the performance accuracy is inversely proportional to the delay interval, decreases with PFC lesions, and that animals anticipate timing during long delays. Thus, our automated Figure-8 maze is effective at assessing working memory and provides novel behavioral measures in rodents.
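
    The control logic described above reduces to a small state machine. The sketch below is hypothetical (the callback names, trial counts, and timings are invented) and only illustrates scoring delayed alternation with an enforced delay between trials:

      # Hypothetical control loop: score delayed alternation with an
      # enforced delay. Callbacks stand in for the tracker and hardware.
      import random
      import time

      def run_session(get_arm, open_gate, deliver_reward,
                      n_trials=40, delay_s=10):
          correct, last_arm = 0, None
          for _ in range(n_trials):
              arm = get_arm()              # e.g., from the vision tracker
              if last_arm is not None and arm != last_arm:
                  correct += 1             # alternation = correct choice
                  deliver_reward()
              last_arm = arm
              time.sleep(delay_s)          # imposed delay interval
              open_gate()                  # release for the next run
          return correct / max(n_trials - 1, 1)

      acc = run_session(lambda: random.choice("LR"),
                        lambda: None, lambda: None,
                        n_trials=5, delay_s=0)
      print(f"alternation accuracy: {acc:.2f}")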

  13. Holographic memory system based on projection recording of computer-generated 1D Fourier holograms.

    Science.gov (United States)

    Betin, A Yu; Bobrinev, V I; Donchenko, S S; Odinokov, S B; Evtikhiev, N N; Starikov, R S; Starikov, S N; Zlokazov, E Yu

    2014-10-01

    Computer generation of holographic structures significantly simplifies the optical scheme used to record microholograms in a holographic memory recording system. Digital holographic synthesis also makes it possible to account for the nonlinear errors of the recording system and thereby improve microhologram quality. Multiplexed recording of holograms is a widespread technique for increasing data recording density. In this article we present a holographic memory system based on digital synthesis of amplitude one-dimensional (1D) Fourier transform holograms and the multiplexed recording of these holograms onto the holographic carrier using an optical projection scheme. 1D Fourier transform holograms are very sensitive to the orientation of the anamorphic optical element (cylindrical lens) that is required for reconstruction of the encoded data object. The multiplexed recording of several holograms with different orientations in an optical projection scheme allowed reconstruction of the data object from each hologram by rotating the cylindrical lens to the corresponding angle. We also discuss two optical schemes for reading out the recorded holograms: a full-page readout system and a line-by-line readout system. We consider the benefits of both systems and present the results of experimental modeling of nonmultiplexed and multiplexed recording and reconstruction of 1D Fourier holograms.

  14. Atomic crystals resistive switching memory

    International Nuclear Information System (INIS)

    Liu Chunsen; Zhang David Wei; Zhou Peng

    2017-01-01

    Facing growing data storage and computing demands, a high access speed memory with low power and non-volatile character is urgently needed. Resistive random access memory, with a 4F^2 cell size, sub-nanosecond switching, cycling endurance of over 10^12 cycles, and information retention exceeding 10 years, is considered a promising next-generation non-volatile memory. However, the energy per bit is still too high to compete against static random access memory and dynamic random access memory. Sneak leakage paths and metal-film sheet resistance issues hinder further scaling. The variation of resistance between different devices, and even between cycles in the same device, holds resistive random access memory back from commercialization. Emerging atomic crystals, possessing fine interfaces without dangling bonds in low dimensions, can provide atomic-level solutions for these persistent issues. Moreover, the unique properties of atomic crystals also enable new types of resistive switching memories, which provide a brand-new direction for resistive random access memory. (topical reviews)

  15. Models of parallel computation: a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state of the art in parallel computational model research is reviewed. We introduce various models developed during the past decades and, according to the features of their target architectures, especially memory organization, classify these parallel computational models into three generations. The models and their characteristics are discussed on the basis of this three-generation classification. We believe that, with the ever increasing speed gap between the CPU and memory systems, incorporating a non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms is becoming more and more complicated, and describing this complicated parallelism hierarchy in future computational models is becoming more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers would reduce the complexity of model analysis, allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features to consider in future model design and research.

  16. The Research on Linux Memory Forensics

    Science.gov (United States)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on operating system APIs, but instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain system process information from physical memory data, and is compatible with multiple versions of the Linux kernel.
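
    The general approach can be illustrated with a toy walker: given member offsets recovered from debug information, follow the circular kernel task list in a raw image and read each process name. All offsets, addresses, and the address-translation callback below are hypothetical, and the self-test fabricates a one-entry image.

      # Toy task-list walker. OFF_* values are hypothetical stand-ins
      # for offsets recovered from ELF debug information.
      import struct

      OFF_COMM = 0xB50        # hypothetical offset of task_struct.comm
      OFF_TASKS_NEXT = 0x950  # hypothetical offset of task_struct.tasks.next

      def read_cstr(buf, off, maxlen=16):
          return buf[off:off + maxlen].split(b"\x00", 1)[0].decode("ascii", "replace")

      def walk_tasks(image, first_task, virt_to_off, limit=200):
          names, addr = [], first_task
          for _ in range(limit):
              base = virt_to_off(addr)                 # address translation
              names.append(read_cstr(image, base + OFF_COMM))
              nxt = struct.unpack_from("<Q", image, base + OFF_TASKS_NEXT)[0]
              addr = nxt - OFF_TASKS_NEXT              # list_head -> struct base
              if addr == first_task:
                  break                                # circular list closed
          return names

      # Self-test with a fabricated one-entry "image":
      img = bytearray(0x1000)
      img[OFF_COMM:OFF_COMM + 4] = b"init"
      struct.pack_into("<Q", img, OFF_TASKS_NEXT, OFF_TASKS_NEXT)  # self-loop
      print(walk_tasks(bytes(img), 0, lambda v: v))    # ['init']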

  17. Data fusion using dynamic associative memory

    Science.gov (United States)

    Lo, Titus K. Y.; Leung, Henry; Chan, Keith C. C.

    1997-07-01

    An associative memory, unlike the addressed memory used in conventional computers, is content addressable. That is, storing and retrieving information are based not on the location of the memory cell but on the content of the information. There are a number of approaches to implementing an associative memory, one of which is to use a neural dynamical system in which the objects being memorized or recognized correspond to its basic attractors. The work presented in this paper investigates the application of a particular type of neural dynamical associative memory, namely the projection network, to pattern recognition and data fusion. Three types of attractors, namely fixed-point, limit-cycle, and chaotic, have been studied, evaluated and compared.
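
    Fixed-point attractor retrieval can be sketched with the classic Hopfield rule, used here as a simpler stand-in for the projection network studied in the paper; the pattern sizes and corruption level are illustrative.

      # Content-addressable recall: iterate the network dynamics from a
      # corrupted cue until it settles into a stored fixed-point attractor.
      import numpy as np

      def train(patterns):
          n = patterns.shape[1]
          W = (patterns.T @ patterns) / n
          np.fill_diagonal(W, 0.0)       # no self-connections
          return W

      def recall(W, cue, steps=20):
          s = cue.copy()
          for _ in range(steps):         # iterate to a fixed point
              s = np.sign(W @ s)
              s[s == 0] = 1
          return s

      rng = np.random.default_rng(1)
      patterns = rng.choice([-1.0, 1.0], size=(3, 64))
      W = train(patterns)
      noisy = patterns[0].copy()
      noisy[:12] *= -1                   # corrupt part of the cue
      print(np.array_equal(recall(W, noisy), patterns[0]))  # usually True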

  18. Aspects of GPU performance in algorithms with random memory access

    Science.gov (United States)

    Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.

    2017-10-01

    The numerical code for solving the Boltzmann equation on a hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that on Tesla K40 accelerators computational performance drops dramatically as the percentage of occupied GPU memory increases. Testing revealed that memory access time increases tens of times after a certain critical percentage of memory is occupied. Moreover, this appears to be a common problem of NVIDIA GPUs arising from their architecture. A few modifications of the numerical algorithm were suggested to overcome this problem. One of them, based on splitting the memory into "virtual" blocks, resulted in a 2.5 times speed-up.

  19. Concurrent Operations of O2-Tree on Shared Memory Multicore Architectures

    OpenAIRE

    Daniel Ohene-Kwofie; E. J. Otoo; Gideon Nimako

    2014-01-01

    Modern computer architectures provide high performance computing capability by having multiple CPU cores. Such systems are also typically associated with very large main-memory capacities, thereby allowing them to be used for fast processing of in-memory database applications. However, most of the concurrency control mechanisms associated with the index structures of these memory-resident databases do not scale well under high transaction rates. This paper presents the O2-Tree, a fast main me...

  20. Memory Hierarchy Design for Next Generation Scalable Many-core Platforms

    OpenAIRE

    Azarkhish, Erfan

    2016-01-01

    Performance and energy consumption in modern computing platforms is largely dominated by the memory hierarchy. The increasing computational power in the multiprocessors and accelerators, and the emergence of the data-intensive workloads (e.g. large-scale graph traversal and scientific algorithms) requiring fast transfer of large volumes of data, are two main trends which intensify this problem by putting even higher pressure on the memory hierarchy. This increasing gap between computation spe...

  1. Wearable Intrinsically Soft, Stretchable, Flexible Devices for Memories and Computing.

    Science.gov (United States)

    Rajan, Krishna; Garofalo, Erik; Chiolerio, Alessandro

    2018-01-27

    A recent trend in the development of mass-consumption electronic devices is towards electronic textiles (e-textiles), smart wearable devices, smart clothes, and flexible or printable electronics. Intrinsically soft, stretchable, flexible Wearable Memories and Computing devices (WMCs) bring us closer to sci-fi scenarios, where future electronic systems are totally integrated in our everyday outfits and help us achieve a higher comfort level, interacting for us with other digital devices such as smartphones and domotics, or with analog devices, such as our brain/peripheral nervous system. WMC will enable each of us to contribute to open and big data systems as individual nodes, providing real-time information about physical and environmental parameters (including air pollution monitoring, sound and light pollution, chemical or radioactive fallout alerts, network availability, and so on). Furthermore, WMC could be directly connected to the human brain and enable extremely fast operation and unprecedented interface complexity, directly mapping the continuous states available to biological systems. This review focuses on recent advances in nanotechnology and materials science and pays particular attention to any result and promising technology that enables intrinsically soft, stretchable, flexible WMC.

  2. Visual memory and visual perception: when memory improves visual search.

    Science.gov (United States)

    Riou, Benoit; Lesourd, Mathieu; Brunel, Lionel; Versace, Rémy

    2011-08-01

    This study examined the relationship between memory and perception in order to identify the influence of a memory dimension on perceptual processing. Our aim was to determine whether variation in the typical size of items (i.e., their size in real life) affects visual search. In two experiments, the congruency between the typical size difference and the perceptual size difference was manipulated in a visual search task. We observed that congruency between the typical and perceptual size differences decreased reaction times in visual search (Exp. 1), and noncongruency between these two differences increased reaction times in visual search (Exp. 2). We argue that these results highlight that memory and perception share some resources, and they reveal the influence of the typical size difference on the computation of the perceptual size difference.

  3. Sensory memory for ambiguous vision.

    Science.gov (United States)

    Pearson, Joel; Brascamp, Jan

    2008-09-01

    In recent years the overlap between visual perception and memory has shed light on our understanding of both. When ambiguous images that normally cause perception to waver unpredictably are presented briefly with intervening blank periods, perception tends to freeze, locking into one interpretation. This indicates that there is a form of memory storage across the blank interval. This memory trace codes low-level characteristics of the stored stimulus. Although a trace is evident after a single perceptual instance, the trace builds over many separate stimulus presentations, indicating a flexible, variable-length time-course. This memory shares important characteristics with priming by non-ambiguous stimuli. Computational models now provide a framework to interpret many empirical observations.

  4. A Compute Capable SSD Architecture for Next-Generation Non-volatile Memories

    Energy Technology Data Exchange (ETDEWEB)

    De, Arup [Univ. of California, San Diego, CA (United States)

    2014-01-01

    Existing storage technologies (e.g., disks and flash) are failing to cope with processor and main memory speeds and are limiting the overall performance of many large-scale I/O or data-intensive applications. Emerging fast byte-addressable non-volatile memory (NVM) technologies, such as phase-change memory (PCM), spin-transfer torque memory (STTM) and memristor are very promising and are approaching DRAM-like performance with lower power consumption and higher density as process technology scales. These new memories are narrowing the performance gap between storage and main memory and are putting forward challenging problems for existing SSD architectures, I/O interfaces (e.g., SATA, PCIe) and software. This dissertation addresses those challenges and presents a novel SSD architecture called XSSD. XSSD offloads computation to storage to exploit fast NVMs and reduce the redundant data traffic across the I/O bus. XSSD offers a flexible RPC-based programming framework that developers can use for application development on the SSD without dealing with the complications of the underlying architecture and communication management. We have built a prototype of XSSD on the BEE3 FPGA prototyping system. We implement various data-intensive applications and achieve speedups and energy efficiency of 1.5-8.9x and 1.7-10.27x, respectively. This dissertation also compares XSSD with previous work on intelligent storage and intelligent memory. The existing ecosystem and these new enabling technologies make this system more viable than earlier ones.

  5. Memory architecture for efficient utilization of SDRAM: a case study of the computation/memory access trade-off

    DEFF Research Database (Denmark)

    Gleerup, Thomas Møller; Holten-Lund, Hans Erik; Madsen, Jan

    2000-01-01

    This paper discusses the trade-off between calculations and memory accesses in a 3D graphics tile renderer for visualization of data from medical scanners. The performance requirement of this application is a frame rate of 25 frames per second when rendering 3D models with 2 million triangles. In software, forward differencing is usually better, but in this hardware implementation, the trade-off has made it possible to develop a very regular memory architecture with a buffering system, which can reach 95% bandwidth utilization using off-the-shelf SDRAM. This is achieved by changing the algorithm to use a memory access strategy with write-only and read-only phases, and a buffering system, which uses round-robin bank write-access combined with burst read-access.

  6. The Memory System You Can't Avoid it, You Can't Ignore it, You Can't Fake it

    CERN Document Server

    Jacob, Bruce

    2009-01-01

    Today, computer-system optimization, at both the hardware and software levels, must consider the details of the memory system in its analysis; failing to do so yields systems that are increasingly inefficient as those systems become more complex. This lecture seeks to introduce the reader to the most important details of the memory system; it targets both computer scientists and computer engineers in industry and in academia. Roughly speaking, computer scientists are the users of the memory system and computer engineers are the designers of the memory system. Both can benefit tremendously from

  7. Gravitational-wave memory revisited: Memory from the merger and recoil of binary black holes

    International Nuclear Information System (INIS)

    Favata, Marc

    2009-01-01

    Gravitational-wave memory refers to the permanent displacement of the test masses in an idealized (freely-falling) gravitational-wave interferometer. Inspiraling binaries produce a particularly interesting form of memory-the Christodoulou memory. Although it originates from nonlinear interactions at 2.5 post-Newtonian order, the Christodoulou memory affects the gravitational-wave amplitude at leading (Newtonian) order. Previous calculations have computed this non-oscillatory amplitude correction during the inspiral phase of binary coalescence. Using an 'effective-one-body' description calibrated with the results of numerical relativity simulations, the evolution of the memory during the inspiral, merger, and ringdown phases, as well as the memory's final saturation value, are calculated. Using this model for the memory, the prospects for its detection are examined, particularly for supermassive black hole binary coalescences that LISA will detect with high signal-to-noise ratios. Coalescing binary black holes also experience center-of-mass recoil due to the anisotropic emission of gravitational radiation. These recoils can manifest themselves in the gravitational-wave signal in the form of a 'linear' memory and a Doppler shift of the quasi-normal-mode frequencies. The prospects for observing these effects are also discussed.

  8. Scientific developments of liquid crystal-based optical memory: a review

    Science.gov (United States)

    Prakash, Jai; Chandran, Achu; Biradar, Ashok M.

    2017-01-01

    The memory behavior of liquid crystals (LCs), although rarely observed, has made very significant headway over the past three decades since its discovery in nematic LCs. It has gone from a mere scientific curiosity to application in a variety of commodities. Memory elements formed from numerous LCs have been protected by patents, and some have been commercialized and used to complement non-volatile memory devices and as memory in personal computers and digital cameras. They also offer the low-cost, large-area, high-speed, and high-density memory needed for advanced computers and digital electronics. Short- and long-duration memory behavior for industrial applications has been obtained from several LC materials, and LC memories with interesting features and applications have been demonstrated using numerous LCs. However, considerable challenges still exist in the search for highly efficient, stable, and long-lifespan materials and methods, so that the development of useful memory devices remains an open problem. This review focuses on the scientific and technological aspects of fascinating applications of LC-based memory. We address the introduction, development status, novel design and engineering principles, and parameters of LC memory. We also address how the amalgamation of LCs could bring significant change/improvement in memory effects in the emerging field of nanotechnology, and the application of LC memory as the active component of futuristic and interesting memory devices.

  9. Breaking the memory wall in MonetDB

    NARCIS (Netherlands)

    P.A. Boncz (Peter); M.L. Kersten (Martin); S. Manegold (Stefan)

    2008-01-01

    textabstractIn the past decades, advances in speed of commodity CPUs have far outpaced advances in RAM latency. Main-memory access has therefore become a performance bottleneck for many computer applications; a phenomenon that is widely known as the "memory wall." In this paper, we report how

  10. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g., high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e., object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  11. Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures

    Science.gov (United States)

    2017-10-04

    to the memory architectures of CPUs and GPUs to obtain good performance and result in good memory performance using cache management. These methods ... The PI and students have developed new methods for path and ray tracing ... The efficiency of our method makes it a good candidate for forming hybrid schemes with wave-based models. One possibility is to couple the ray curve

  12. PIMS: Memristor-Based Processing-in-Memory-and-Storage.

    Energy Technology Data Exchange (ETDEWEB)

    Cook, Jeanine

    2018-02-01

    Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that accessed larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in significant energy efficiency improvement, but only about an O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.

  13. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension.

    Science.gov (United States)

    Klooster, Nathaniel B; Cook, Susan W; Uc, Ergun Y; Duff, Melissa C

    2014-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson's disease (PD), and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi (TOH) paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.

  14. Breaking the memory wall in MonetDB

    NARCIS (Netherlands)

    Boncz, P.A.; Kersten, M.L.; Manegold, S.

    2008-01-01

    In the past decades, advances in speed of commodity CPUs have far outpaced advances in RAM latency. Main-memory access has therefore become a performance bottleneck for many computer applications; a phenomenon that is widely known as the "memory wall." In this paper, we report how research around

  15. A Josephson ternary associative memory cell

    International Nuclear Information System (INIS)

    Morisue, M.; Suzuki, K.

    1989-01-01

    This paper describes a three-valued content addressable memory cell using a Josephson complementary ternary logic circuit named the JCTL. The memory cell proposed here can perform the three operations of searching, writing and reading in a ternary logic system. The principle of the memory circuit is illustrated in detail using the threshold characteristics of the JCTL. In order to investigate how high-performance operation can be achieved, computer simulations have been made. Simulation results show that the cycle time of the memory operation is 120 ps, power consumption is about 0.5 μW/cell, and the tolerances of the writing and reading operations are ±15% and ±24%, respectively

  16. Forming of shape memory composite structures

    DEFF Research Database (Denmark)

    Santo, Loredana; Quadrini, Fabrizio; De Chiffre, Leonardo

    2013-01-01

    A new forming procedure was developed to produce shape memory composite structures having structural composite skins over a shape memory polymer core. The core material was obtained by solid state foaming of an epoxy polyester resin with remarkable shape memory properties. The composite skin consisted of a two-layer unidirectional thermoplastic composite (glass filled polypropylene). Skins were joined to the foamed core by hot compression without any adhesive: a very good adhesion was obtained, as experimental tests confirmed. The structure of the foam core was investigated by means of computer axial tomography. Final shape memory composite panels were mechanically tested by three point bending before and after a shape memory step. This step consisted of compressing the panel to reduce its thickness by up to 60%. At the end of the bending test the panel shape was recovered by heating and a new memory step

  17. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  18. Topological Schemas of Memory Spaces

    Science.gov (United States)

    Babichev, Andrey; Dabaghian, Yuri A.

    2018-01-01

    Hippocampal cognitive map—a neuronal representation of the spatial environment—is widely discussed in the computational neuroscience literature for decades. However, more recent studies point out that hippocampus plays a major role in producing yet another cognitive framework—the memory space—that incorporates not only spatial, but also non-spatial memories. Unlike the cognitive maps, the memory spaces, broadly understood as “networks of interconnections among the representations of events,” have not yet been studied from a theoretical perspective. Here we propose a mathematical approach that allows modeling memory spaces constructively, as epiphenomena of neuronal spiking activity and thus to interlink several important notions of cognitive neurophysiology. First, we suggest that memory spaces have a topological nature—a hypothesis that allows treating both spatial and non-spatial aspects of hippocampal function on equal footing. We then model the hippocampal memory spaces in different environments and demonstrate that the resulting constructions naturally incorporate the corresponding cognitive maps and provide a wider context for interpreting spatial information. Lastly, we propose a formal description of the memory consolidation process that connects memory spaces to the Morris' cognitive schemas-heuristic representations of the acquired memories, used to explain the dynamics of learning and memory consolidation in a given environment. The proposed approach allows evaluating these constructs as the most compact representations of the memory space's structure. PMID:29740306

  19. Topological Schemas of Memory Spaces

    Directory of Open Access Journals (Sweden)

    Andrey Babichev

    2018-04-01

    Hippocampal cognitive map—a neuronal representation of the spatial environment—is widely discussed in the computational neuroscience literature for decades. However, more recent studies point out that hippocampus plays a major role in producing yet another cognitive framework—the memory space—that incorporates not only spatial, but also non-spatial memories. Unlike the cognitive maps, the memory spaces, broadly understood as “networks of interconnections among the representations of events,” have not yet been studied from a theoretical perspective. Here we propose a mathematical approach that allows modeling memory spaces constructively, as epiphenomena of neuronal spiking activity and thus to interlink several important notions of cognitive neurophysiology. First, we suggest that memory spaces have a topological nature—a hypothesis that allows treating both spatial and non-spatial aspects of hippocampal function on equal footing. We then model the hippocampal memory spaces in different environments and demonstrate that the resulting constructions naturally incorporate the corresponding cognitive maps and provide a wider context for interpreting spatial information. Lastly, we propose a formal description of the memory consolidation process that connects memory spaces to the Morris' cognitive schemas-heuristic representations of the acquired memories, used to explain the dynamics of learning and memory consolidation in a given environment. The proposed approach allows evaluating these constructs as the most compact representations of the memory space's structure.

  20. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...

  1. SAR: a fast computer for CAMAC data acquisition

    International Nuclear Information System (INIS)

    Bricaud, B.; Faivre, J.C.; Pain, J.

    1979-01-01

    An original 32-bit computer architecture has been designed around the AMD 2901 bit-slice microprocessor. A 32-bit instruction set was defined with a 200 ns execution time per instruction. The basic memory capacity is equally divided into two 32K 32-bit zones, named Program memory and Data memory. The computer has a Camac Branch interface; during a Camac transfer activation, which lasts seven cycles, five cycles are free for processing

  2. Advanced topics in security computer system design

    International Nuclear Information System (INIS)

    Stachniak, D.E.; Lamb, W.R.

    1989-01-01

    The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features

  3. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    Science.gov (United States)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described, and performance measurements for the single- and multiprocessor implementations are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a Reynolds number Re_tau = 180 has shown good agreement with existing data.

  4. A review of emerging non-volatile memory (NVM) technologies and applications

    Science.gov (United States)

    Chen, An

    2016-11-01

    This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.

  5. Managing internode data communications for an uninitialized process in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
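
    The claimed flow can be modeled in a few lines, purely as an illustration (the class name and buffer size are invented): messages for an uninitialized process accumulate in a fixed-size MU buffer, and an agent spills them to a temporary buffer in main memory when it fills.

      # Toy model of the spill-to-main-memory flow described above.
      class MessageUnit:
          def __init__(self, capacity=4):
              self.mu_buffer = []            # fixed-size hardware buffer
              self.capacity = capacity
              self.temp_buffer = []          # overflow in main memory

          def receive(self, msg):
              if len(self.mu_buffer) >= self.capacity:
                  # application agent drains the MU buffer into main memory
                  self.temp_buffer.extend(self.mu_buffer)
                  self.mu_buffer.clear()
              self.mu_buffer.append(msg)

          def drain_on_init(self):
              # once initialized, the process consumes spilled messages first
              msgs = self.temp_buffer + self.mu_buffer
              self.temp_buffer, self.mu_buffer = [], []
              return msgs

      mu = MessageUnit()
      for i in range(10):
          mu.receive(f"msg{i}")
      print(mu.drain_on_init())              # all ten messages, in order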

  6. Parietal EEG alpha suppression time of memory retrieval reflects memory load while the alpha power of memory maintenance is a composite of the visual process according to simultaneous and successive Sternberg memory tasks.

    Science.gov (United States)

    Okuhata, Shiho; Kusanagi, Takuya; Kobayashi, Tetsuo

    2013-10-25

    The present study investigated EEG alpha activity during visual Sternberg memory tasks using two different stimulus presentation modes to elucidate how the presentation mode affected parietal alpha activity. EEGs were recorded from 10 healthy adults during the Sternberg tasks in which memory items were presented simultaneously and successively. EEG power and suppression time (ST) in the alpha band (8-13Hz) were computed for the memory maintenance and retrieval phases. The alpha activity differed according to the presentation mode during the maintenance phase but not during the retrieval phase. Results indicated that parietal alpha power recorded during the maintenance phase did not reflect the memory load alone. In contrast, ST during the retrieval phase increased with the memory load for both presentation modes, indicating a serial memory scanning process, regardless of the presentation mode. These results indicate that there was a dynamic transition in the memory process from the maintenance phase, which was sensitive to external factors, toward the retrieval phase, during which the process converged on the sequential scanning process, the Sternberg task essentially required. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
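
    The two measures can be sketched as follows, assuming a standard band-pass plus Hilbert-envelope estimate of alpha power and an invented suppression threshold (the study's exact criteria may differ):

      # Alpha-band (8-13 Hz) power and suppression time: seconds during
      # which power stays below a baseline-derived threshold.
      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 250.0                                 # sampling rate, Hz
      t = np.arange(0, 4, 1 / fs)
      eeg = np.sin(2 * np.pi * 10 * t) * (t < 2) + 0.3 * np.random.randn(t.size)

      b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
      alpha = filtfilt(b, a, eeg)                # alpha-band signal
      power = np.abs(hilbert(alpha)) ** 2        # instantaneous alpha power

      baseline = power[: int(fs)].mean()         # first second as baseline
      suppressed = power < 0.5 * baseline        # illustrative criterion
      suppression_time = suppressed.sum() / fs   # seconds of suppression
      print(f"mean alpha power {power.mean():.3f}, ST {suppression_time:.2f} s")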

  7. Operational Semantics of a Weak Memory Model inspired by Go

    OpenAIRE

    Fava, Daniel Schnetzer; Stolz, Volker; Valle, Stian

    2017-01-01

    A memory model dictates which values may be returned when reading from memory. In a parallel computing setting, the memory model affects how processes communicate through shared memory. The design of a proper memory model is a balancing act. On one hand, memory models must be lax enough to allow common hardware and compiler optimizations. On the other, the more lax the model, the harder it is for developers to reason about their programs. In order to alleviate the burden on programmers, a wea...
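
    The trade-off the abstract describes is usually introduced with the store-buffering litmus test. The sketch below renders it in Python purely as an illustration of the program shape; Python's own cross-thread visibility guarantees are an implementation detail, and the comments state what sequentially consistent versus weak models allow.

    ```python
    # Store-buffering litmus test, the standard example a memory model must
    # decide. Under sequential consistency the outcome (r1, r2) == (0, 0) is
    # impossible; weaker models (e.g., hardware store buffers) also allow it.
    import threading

    x = y = 0
    r1 = r2 = None

    def thread_a():
        global y, r1
        y = 1        # store
        r1 = x       # load

    def thread_b():
        global x, r2
        x = 1        # store
        r2 = y       # load

    ta = threading.Thread(target=thread_a)
    tb = threading.Thread(target=thread_b)
    ta.start(); tb.start(); ta.join(); tb.join()

    # Sequentially consistent outcomes: (0, 1), (1, 0), (1, 1).
    # A lax (weak) model may additionally permit (0, 0).
    print(r1, r2)
    ```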

  8. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    Directory of Open Access Journals (Sweden)

    Nathaniel Bloem Klooster

    2015-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture’s ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson’s disease, and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.

  9. C-RAM: breaking mobile device memory barriers using the cloud

    OpenAIRE

    Pamboris, A; Pietzuch, P

    2015-01-01

    Mobile applications are constrained by the available memory of mobile devices. We present C-RAM, a system that uses cloud-based memory to extend the memory of mobile devices. It splits application state and its associated computation between a mobile device and a cloud node to allow applications to consume more memory, while minimising the performance impact. C-RAM thus enables developers to realise new applications or port legacy desktop applications with a large memory footprint to mobile ...

  10. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  11. A Working Memory Test Battery: Java-Based Collection of Seven Working Memory Tasks

    Directory of Open Access Journals (Sweden)

    James M Stone

    2015-06-01

    Working memory is a key construct within cognitive science. It is an important theory in its own right, but the influence of working memory is enriched due to the widespread evidence that measures of its capacity are linked to a variety of functions in wider cognition. To facilitate the active research environment into this topic, we describe seven computer-based tasks that provide estimates of short-term and working memory incorporating both visuospatial and verbal material. The memory span tasks provided are: digit span, matrix span, arrow span, reading span, operation span, rotation span, and symmetry span. These tasks are built to be simple to use, flexible to adapt to the specific needs of the research design, and are open source. All files can be downloaded from the project website http://www.cognitivetools.uk and the source code is available via Github.
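
    The battery itself is Java-based and GUI-driven; the console sketch below only mirrors the logic of one such task (a digit span trial with an adaptive list length), with presentation timing and scoring simplified for illustration.

    ```python
    # Minimal digit-span procedure in the spirit of the battery's span tasks:
    # present a digit sequence, collect ordered recall, and lengthen the list
    # until recall fails. Timing and scoring are simplified for illustration.
    import random
    import time

    def digit_span_trial(length):
        digits = [random.randint(0, 9) for _ in range(length)]
        for d in digits:
            print(d)
            time.sleep(1.0)            # one digit per second, a common convention
            print("\033[2J")           # crude ANSI "clear screen" between digits
        answer = input("Type the digits in order: ").strip()
        return answer == "".join(map(str, digits))

    length = 3
    while digit_span_trial(length):
        length += 1                    # the list grows until recall fails
    print(f"Estimated digit span: {length - 1}")
    ```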

  12. Experimental Effects of Acute Exercise on Iconic Memory, Short-Term Episodic, and Long-Term Episodic Memory.

    Science.gov (United States)

    Yanes, Danielle; Loprinzi, Paul D

    2018-06-11

    The present experiment evaluated the effects of acute exercise on iconic memory and short- and long-term episodic memory. A two-arm, parallel-group randomized experiment was employed (n = 20 per group; M age = 21 years). The experimental group engaged in an acute bout of moderate-intensity treadmill exercise for 15 min, while the control group engaged in a seated, time-matched computer task. Afterwards, the participants engaged in a paragraph-level episodic memory task (20 min delay and 24 h delay recall) as well as an iconic memory task, which involved 10 trials (at various speeds from 100 ms to 800 ms) of recalling letters from a 3 × 3 array matrix. For iconic memory, there was a significant main effect for time (F = 42.9, p memory scores at both the baseline (19.22 vs. 17.20) and follow-up (18.15 vs. 15.77), but these results were not statistically significant. These findings provide some suggestive evidence hinting towards an iconic memory and episodic benefit from acute exercise engagement.

  13. Scalable unit commitment by memory-bounded ant colony optimization with A* local search

    Energy Technology Data Exchange (ETDEWEB)

    Saber, Ahmed Yousuf; Alshareef, Abdulaziz Mohammed [Department of Electrical and Computer Engineering, King Abdulaziz University, P.O. Box 80204, Jeddah 21589 (Saudi Arabia)

    2008-07-15

    Ant colony optimization (ACO) is successfully applied in optimization problems. Performance of the basic ACO for small problems with moderate dimension and searching space is satisfactory. As the searching space grows exponentially in the large-scale unit commitment problem, the basic ACO is not applicable, because the vast pheromone matrix of ACO cannot be handled within practical time and physical computer-memory limits. However, memory-bounded methods prune the least-promising nodes to fit the system in computer memory. Therefore, the authors propose memory-bounded ant colony optimization (MACO) in this paper for the scalable (no restriction for system size) unit commitment problem. This MACO intelligently overcomes the limitation of computer memory, and does not permit the system to grow beyond a bound on memory. In the memory-bounded ACO implementation, an A* heuristic is introduced to increase local searching ability, and a probabilistic nearest-neighbor method is applied to estimate pheromone intensity for forgotten values. Finally, benchmark data sets and existing methods are used to show the effectiveness of the proposed method. (author)
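
    The memory-bounding idea can be sketched in a few lines: cap the pheromone table, prune the least-promising entries when the cap is exceeded, and fall back to an estimate when reading forgotten entries. The constant fallback below is a deliberate simplification of the paper's probabilistic nearest-neighbor estimate, the A*-guided local search is omitted, and all names are illustrative.

    ```python
    # Sketch of memory-bounded pheromone storage: the table is capped, the
    # least-promising entries are pruned, and reads of pruned ("forgotten")
    # entries fall back to an estimate. A constant stands in for the paper's
    # probabilistic nearest-neighbor estimation.
    MAX_ENTRIES = 1000        # memory bound on the pheromone table
    DEFAULT_TAU = 0.1         # fallback pheromone intensity for forgotten entries

    pheromone = {}            # (state, decision) -> pheromone intensity

    def get_tau(key):
        return pheromone.get(key, DEFAULT_TAU)   # estimate for forgotten values

    def deposit(key, amount):
        pheromone[key] = get_tau(key) + amount
        if len(pheromone) > MAX_ENTRIES:
            prune()

    def prune():
        # drop the least-promising 10% so the table stays within the bound
        by_promise = sorted(pheromone.items(), key=lambda kv: kv[1])
        for key, _ in by_promise[: MAX_ENTRIES // 10]:
            del pheromone[key]
    ```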

  14. Event boundaries and memory improvement.

    Science.gov (United States)

    Pettijohn, Kyle A; Thompson, Alexis N; Tamplin, Andrea K; Krawietz, Sabine A; Radvansky, Gabriel A

    2016-03-01

    The structure of events can influence later memory for information that is embedded in them, with evidence indicating that event boundaries can both impair and enhance memory. The current study explored whether the presence of event boundaries during encoding can structure information to improve memory. In Experiment 1, memory for a list of words was tested in which event structure was manipulated by having participants walk through a doorway, or not, halfway through the word list. In Experiment 2, memory for lists of words was tested in which event structure was manipulated using computer windows. Finally, in Experiments 3 and 4, event structure was manipulated by having event shifts described in narrative texts. The consistent finding across all of these methods and materials was that memory was better when the information was distributed across two events rather than combined into a single event. Moreover, Experiment 4 demonstrated that increasing the number of event boundaries from one to two increased the memory benefit. These results are interpreted in the context of the Event Horizon Model of event cognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Achieving memory scalability in the GYSELA code to fit Exascale constraints

    International Nuclear Information System (INIS)

    Rozar, Fabien; Latu, Guillaume; Roman, Jean

    2014-01-01

    Gyrokinetic simulations lead to huge computational needs. Up to now, the semi-Lagrangian code Gysela has performed large simulations using a few thousand cores (65k cores). But to understand the nature of plasma turbulence more accurately, finer resolutions are desired, which makes Gysela a good candidate to exploit the computational power of future Exascale machines. Among the Exascale challenges, the reduced memory per core is one of the most critical issues. This paper deals with memory management in order to reduce the memory peak, and presents an approach to understanding the memory behaviour of an application when dealing with very large meshes. This enables us to extrapolate the behaviour of Gysela for the expected capabilities of an Exascale machine. (authors)
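
    The kind of per-phase memory-peak instrumentation such a study relies on can be sketched with Python's standard tracemalloc module; the "mesh" allocation below is a stand-in for a simulation array, not Gysela code.

    ```python
    # Sketch of memory-peak measurement around an allocation-heavy phase.
    # The "mesh" is a stand-in for a large simulation array; tracemalloc is
    # Python's standard allocation tracer.
    import tracemalloc

    tracemalloc.start()

    mesh = [[0.0] * 1000 for _ in range(1000)]   # stand-in for a large mesh slab
    tmp = [row[:] for row in mesh]               # a temporary copy raises the peak
    del tmp                                      # freeing lowers current usage, not the peak

    current, peak = tracemalloc.get_traced_memory()
    print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
    tracemalloc.stop()
    ```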

  16. Dynamical behaviour of neuronal networks iterated with memory

    International Nuclear Information System (INIS)

    Melatagia, P.M.; Ndoundam, R.; Tchuente, M.

    2005-11-01

    We study memory iteration where the updating considers a longer history of each site and the set of interaction matrices is palindromic. We analyze two different ways of updating the networks: parallel iteration with memory and sequential iteration with memory, which we introduce in this paper. For parallel iteration, we define a Lyapunov functional which permits us to characterize the periodic behaviour and explicitly bound the transient lengths of neural networks iterated with memory. For sequential iteration, we use an algebraic invariant to characterize the periodic behaviour of the studied model of neural computation. (author)
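
    A sketch of parallel iteration with memory of depth k: each new state is a threshold function of the last k states weighted by k interaction matrices, and the palindromic condition means the matrix sequence reads the same in both directions (A_i = A_{k+1-i}). Sizes, thresholds, and the sign nonlinearity below are illustrative choices, not the paper's exact model.

    ```python
    # Parallel iteration with memory of depth k: the next state depends on the
    # last k states through interaction matrices A_1..A_k. The palindromic
    # condition of the paper means A_i == A_{k+1-i}. All sizes are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 8, 3
    A = rng.integers(-1, 2, size=(k, n, n)).astype(float)
    A[2] = A[0]                                   # palindromic: A_1 == A_3
    theta = np.zeros(n)                           # thresholds

    history = [rng.choice([-1, 1], size=n) for _ in range(k)]   # last k states

    def step(history):
        field = sum(A[i] @ history[-1 - i] for i in range(k))
        new = np.where(field >= theta, 1, -1)     # threshold (sign) update
        return history[1:] + [new]

    for _ in range(10):
        history = step(history)
    print(history[-1])
    ```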

  17. Polymorphous Computing Architecture (PCA) Kernel-Level Benchmarks

    National Research Council Canada - National Science Library

    Lebak, J

    2004-01-01

    .... "Computation" aspects include floating-point and integer performance, as well as the memory hierarchy, while the "communication" aspects include the network, the memory hierarchy, and the 110 capabilities...

  18. Emerging memory technologies design, architecture, and applications

    CERN Document Server

    2014-01-01

    This book explores the design implications of emerging, non-volatile memory (NVM) technologies on future computer memory hierarchy architecture designs. Since NVM technologies combine the speed of SRAM, the density of DRAM, and the non-volatility of Flash memory, they are very attractive as the basis for future universal memories. This book provides a holistic perspective on the topic, covering modeling, design, architecture and applications. The practical information included in this book will enable designers to exploit emerging memory technologies to improve significantly the performance/power/reliability of future, mainstream integrated circuits. • Provides a comprehensive reference on designing modern circuits with emerging, non-volatile memory technologies, such as MRAM and PCRAM; • Explores new design opportunities offered by emerging memory technologies, from a holistic perspective; • Describes topics in technology, modeling, architecture and applications; • Enables circuit designers to ex...

  19. EDITORIAL: Non-volatile memory based on nanostructures Non-volatile memory based on nanostructures

    Science.gov (United States)

    Kalinin, Sergei; Yang, J. Joshua; Demming, Anna

    2011-06-01

    Non-volatile memory refers to the crucial ability of computers to store information once the power source has been removed. Traditionally this has been achieved through flash, magnetic computer storage and optical discs, and in the case of very early computers, paper tape and punched cards. While computers have advanced considerably from paper and punched card memory devices, there are still limits to current non-volatile memory devices that restrict them to use as secondary storage from which data must be loaded and carefully saved when power is shut off. Denser, faster, low-energy non-volatile memory is highly desired and nanostructures are the critical enabler. This special issue on non-volatile memory based on nanostructures describes some of the new physics and technology that may revolutionise future computers. Phase change random access memory, which exploits the reversible phase change between crystalline and amorphous states, holds potential for future memory devices. The chalcogenide Ge2Sb2Te5 (GST) is a promising material in this field because it combines a high activation energy for crystallization and a relatively low crystallization temperature, as well as a low melting temperature and low conductivity, which accommodates localized heating. Doping is often used to lower the current required to activate the phase change or 'reset' GST but this often aggravates other problems. Now researchers in Korea report in-depth studies of SiO2-doped GST and identify ways of optimising the material's properties for phase-change random access memory [1]. Resistance switching is an area that has attracted a particularly high level of interest for non-volatile memory technology, and a great deal of research has focused on the potential of TiO2 as a model system in this respect. Researchers at HP labs in the US have made notable progress in this field, and among the work reported in this special issue they describe means to control the switch resistance and show

  20. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel computing platforms, or by using the standard parallelization library MPI. The platforms used for benchmark calculations were a distributed-memory vector-parallel computer Fujitsu VPP500, a distributed-memory massively parallel computer Intel Paragon, and distributed-memory scalar-parallel computers Hitachi SR2201 and IBM SP2. As is generally the case, linear speedup could be obtained for large-scale problems, but parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty for assembly powers was less than 0.1% for the PWR full-core calculation with more than 10 million histories, which took about 1.5 hours with massively parallel computing. (author)
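
    The history-level parallelism described here has a simple shape that can be sketched with mpi4py: each processing element simulates its share of particle histories and a global reduction combines the tallies. The "transport" kernel below is a trivial stand-in, and the per-rank seeding is a simplification of proper parallel random-number streams.

    ```python
    # Sketch of history-level Monte Carlo parallelism: each processing element
    # (PE) runs its batch of histories; a reduction combines the tallies.
    # The transport kernel is a trivial stand-in. Run with e.g.:
    #   mpiexec -n 4 python mc.py
    import random
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    TOTAL_HISTORIES = 1_000_000
    local_n = TOTAL_HISTORIES // size      # batch size per PE
    random.seed(rank)                      # simplistic independent streams

    local_tally = sum(1 for _ in range(local_n) if random.random() < 0.5)

    tally = comm.reduce(local_tally, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"tallied fraction ~ {tally / TOTAL_HISTORIES:.4f}")
    ```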

  1. I. WORKING MEMORY CAPACITY IN CONTEXT: MODELING DYNAMIC PROCESSES OF BEHAVIOR, MEMORY, AND DEVELOPMENT.

    Science.gov (United States)

    Simmering, Vanessa R

    2016-09-01

    Working memory is a vital cognitive skill that underlies a broad range of behaviors. Higher cognitive functions are reliably predicted by working memory measures from two domains: children's performance on complex span tasks, and infants' performance in looking paradigms. Despite the similar predictive power across these research areas, theories of working memory development have not connected these different task types and developmental periods. The current project takes a first step toward bridging this gap by presenting a process-oriented theory, focusing on two tasks designed to assess visual working memory capacity in infants (the change-preference task) versus children and adults (the change detection task). Previous studies have shown inconsistent results, with capacity estimates increasing from one to four items during infancy, but only two to three items during early childhood. A probable source of this discrepancy is the different task structures used with each age group, but prior theories were not sufficiently specific to explain how performance relates across tasks. The current theory focuses on cognitive dynamics, that is, how memory representations are formed, maintained, and used within specific task contexts over development. This theory was formalized in a computational model to generate three predictions: 1) capacity estimates in the change-preference task should continue to increase beyond infancy; 2) capacity estimates should be higher in the change-preference versus change detection task when tested within individuals; and 3) performance should correlate across tasks because both rely on the same underlying memory system. I also tested a fourth prediction, that development across tasks could be explained through increasing real-time stability, realized computationally as strengthening connectivity within the model. Results confirmed these predictions, supporting the cognitive dynamics account of performance and developmental changes in real

  2. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  3. Consolidation of long-term memory: Evidence and alternatives.

    NARCIS (Netherlands)

    Meeter, M.; Murre, J.M.J.

    2004-01-01

    Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how

  4. Near-field NanoThermoMechanical memory

    International Nuclear Information System (INIS)

    Elzouka, Mahmoud; Ndao, Sidy

    2014-01-01

    In this letter, we introduce the concept of NanoThermoMechanical Memory. Unlike electronic memory, a NanoThermoMechanical memory device uses heat instead of electricity to record, store, and recover data. Memory function is achieved through the coupling of near-field thermal radiation and thermal expansion, resulting in negative differential thermal resistance and thermal latching. Here, we demonstrate theoretically via numerical modeling the concept of near-field thermal radiation enabled negative differential thermal resistance that achieves bistable states. Design and implementation of a practical silicon-based NanoThermoMechanical memory device are proposed, along with a study of its dynamic response under write/read cycles. With more than 50% of the world's energy losses being in the form of heat, along with the ever-increasing need to develop computer technologies which can operate in harsh environments (e.g., very high temperatures), NanoThermoMechanical memory and logic devices may hold the answer.

  5. Phenomenological validity of an OCD-memory model and the remember/know distinction

    NARCIS (Netherlands)

    van den Hout, M.; Kindt, M.

    2003-01-01

    In earlier experiments using interactive computer animation with healthy subjects, it was found that displaying compulsive-like repeated checking behavior affects memory. That is, checking does not alter actual memory accuracy, but it does affect 'meta-memory': as checking continues, recollections

  6. Polymorphous computing fabric

    Science.gov (United States)

    Wolinski, Christophe Czeslaw [Los Alamos, NM; Gokhale, Maya B [Los Alamos, NM; McCabe, Kevin Peter [Los Alamos, NM

    2011-01-18

    Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

  7. Selective updating of working memory content modulates meso-cortico-striatal activity.

    Science.gov (United States)

    Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S

    2011-08-01

    Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling. Published by Elsevier Inc.

  8. Attention and visual memory in visualization and computer graphics.

    Science.gov (United States)

    Healey, Christopher G; Enns, James T

    2012-07-01

    A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see" details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

  9. Construction and Application of an AMR Algorithm for Distributed Memory Computers

    OpenAIRE

    Deiterding, Ralf

    2003-01-01

    While the parallelization of blockstructured adaptive mesh refinement techniques is relatively straightforward on shared memory architectures, appropriate distribution strategies for the emerging generation of distributed memory machines are a topic of on-going research. In this paper, a locality-preserving domain decomposition is proposed that partitions the entire AMR hierarchy from the base level on. It is shown that the approach reduces the communication costs and simplifies the im...

  10. Short-term memory and long-term memory are still different.

    Science.gov (United States)

    Norris, Dennis

    2017-09-01

    A commonly expressed view is that short-term memory (STM) is nothing more than activated long-term memory. If true, this would overturn a central tenet of cognitive psychology-the idea that there are functionally and neurobiologically distinct short- and long-term stores. Here I present an updated case for a separation between short- and long-term stores, focusing on the computational demands placed on any STM system. STM must support memory for previously unencountered information, the storage of multiple tokens of the same type, and variable binding. None of these can be achieved simply by activating long-term memory. For example, even a simple sequence of digits such as "1, 3, 1" where there are 2 tokens of the digit "1" cannot be stored in the correct order simply by activating the representations of the digits "1" and "3" in LTM. I also review recent neuroimaging data that has been presented as evidence that STM is activated LTM and show that these data are exactly what one would expect to see based on a conventional 2-store view. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. MEMORY EFFICIENT SEMI-GLOBAL MATCHING

    Directory of Open Access Journals (Sweden)

    H. Hirschmüller

    2012-07-01

    Semi-Global Matching (SGM) is a robust stereo method that has proven its usefulness in various applications ranging from aerial image matching to driver assistance systems. It supports pixelwise matching for maintaining sharp object boundaries and fine structures and can be implemented efficiently on different computation hardware. Furthermore, the method is not sensitive to the choice of parameters. The structure of the matching algorithm is well suited to highly parallel hardware, e.g. FPGAs and GPUs. The drawback of SGM is a temporary memory requirement that depends on both the number of pixels and the disparity range. On the one hand this results in long idle times due to the bandwidth limitations of the external memory, and on the other hand the capacity bounds are quickly reached. A full HD image with a size of 1920 × 1080 pixels and a disparity range of 512 pixels already requires 1 billion elements, i.e. at least several GB of RAM depending on the element size, which is not available on standard FPGA and GPU boards. The novel memory-efficient (eSGM) method is an advancement in which the amount of temporary memory depends only on the number of pixels and not on the disparity range. This permits matching of huge images in one piece and reduces the memory-bandwidth requirements for real-time mobile robotics. The feature comes at the cost of 50% more compute operations compared to SGM. This overhead is compensated by the previously idle compute logic within the FPGA and the GPU and therefore results in an overall performance increase. We show that eSGM produces the same high-quality disparity images as SGM and demonstrate its performance both on an aerial image pair with 142 MPixel and within a real-time mobile robotic application. We have implemented the new method on the CPU, GPU and FPGA. We conclude that eSGM is advantageous for a GPU implementation and essential for an implementation on our FPGA.
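
    To make the memory argument concrete, the sketch below implements SGM's core recurrence along a single left-to-right path; it is the per-path aggregated-cost array L, sized pixels × disparities, that eSGM reorganizes. The penalties and sizes are illustrative, and eSGM's reordering itself is not reproduced here.

    ```python
    # SGM's core recurrence along one path: the aggregated cost L(x, d) adds
    # the matching cost C(x, d) to the best predecessor cost, penalizing small
    # disparity changes with P1 and large ones with P2. The temporary array L,
    # sized pixels x disparities, is the memory consumer discussed above.
    import numpy as np

    def aggregate_path(C, P1=10.0, P2=120.0):
        W, D = C.shape                       # scanline pixels x disparity range
        L = np.zeros_like(C)
        L[0] = C[0]
        for x in range(1, W):
            prev = L[x - 1]
            best_prev = prev.min()
            for d in range(D):
                candidates = [prev[d],
                              prev[d - 1] + P1 if d > 0 else np.inf,
                              prev[d + 1] + P1 if d < D - 1 else np.inf,
                              best_prev + P2]
                L[x, d] = C[x, d] + min(candidates) - best_prev  # keeps L bounded
        return L

    C = np.random.randint(0, 64, size=(16, 8)).astype(np.float64)  # toy cost slice
    disparity = aggregate_path(C).argmin(axis=1)                   # winner per pixel
    print(disparity)
    ```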

  12. Enhancing an appointment diary on a pocket computer for use by people after brain injury.

    Science.gov (United States)

    Wright, P; Rogers, N; Hall, C; Wilson, B; Evans, J; Emslie, H

    2001-12-01

    People with memory loss resulting from brain injury benefit from purpose-designed memory aids such as appointment diaries on pocket computers. The present study explores the effects of extending the range of memory aids and including games. For 2 months, 12 people who had sustained brain injury were loaned a pocket computer containing three purpose-designed memory aids: diary, notebook and to-do list. A month later they were given another computer with the same memory aids but a different method of text entry (physical keyboard or touch-screen keyboard). Machine order was counterbalanced across participants. Assessment was by interviews during the loan periods, rating scales, performance tests and computer log files. All participants could use the memory aids and ten people (83%) found them very useful. Correlations among the three memory aids were not significant, suggesting individual variation in how they were used. Games did not increase use of the memory aids, nor did loan of the preferred pocket computer (with physical keyboard). Significantly more diary entries were made by people who had previously used other memory aids, suggesting that a better understanding of how to use a range of memory aids could benefit some people with brain injury.

  13. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, the authors investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.

  14. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit; Brown, Forrest

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations. (author)
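
    The remote-retrieval and remote-accumulation pattern of both records can be sketched with MPI one-sided (RMA) operations via mpi4py: each rank exposes its tally slab in an RMA window, and a neighbor accumulates into it without involving the target's CPU. The slab size and neighbor layout are illustrative.

    ```python
    # Sketch of one-sided tally accumulation: every rank exposes a local tally
    # slab in an RMA window; a rank whose particle crosses into a neighbor's
    # spatial domain accumulates into the neighbor's slab remotely.
    # Run with e.g.: mpiexec -n 2 python rma.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    tallies = np.zeros(8, dtype=np.float64)           # local tally slab
    win = MPI.Win.Create(tallies, comm=comm)          # expose it for RMA

    target = (rank + 1) % comm.Get_size()             # neighbor owning the next domain
    contribution = np.full(8, float(rank + 1))

    win.Lock(target)                                  # passive-target access epoch
    win.Accumulate(contribution, target, op=MPI.SUM)  # remote tally accumulation
    win.Unlock(target)

    comm.Barrier()                                    # ensure all epochs completed
    print(rank, tallies)
    win.Free()
    ```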

  15. Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.

    Science.gov (United States)

    Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B

    2013-01-01

    A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.

  16. Comparing Memory-Efficient Genome Assemblers on Stand-Alone and Cloud Infrastructures

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2013-09-27

    A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.

  17. [Artificial intelligence meeting neuropsychology. Semantic memory in normal and pathological aging].

    Science.gov (United States)

    Aimé, Xavier; Charlet, Jean; Maillet, Didier; Belin, Catherine

    2015-03-01

    Artificial intelligence (AI) is the subject of much research, but also of many fantasies. It aims to reproduce human intelligence in its capacities for learning, knowledge storage and computation. In 2014, the Defense Advanced Research Projects Agency (DARPA) started the Restoring Active Memory (RAM) program, which attempts to develop implantable technology to bridge gaps in the injured brain and restore normal memory function to people with memory loss caused by injury or disease. In another field of AI, computational ontologies (formal and shared conceptualizations) attempt to model knowledge in order to represent a structured and unambiguous meaning of the concepts of a target domain. The aim of these structures is to ensure a consensual understanding of their meaning and a consistent use (the same concept is used by all to categorize the same individuals). The first knowledge representations in the field of AI were largely based on models of semantic memory. Semantic memory, as a component of long-term memory, is the memory of words, ideas and concepts. It is the only declarative memory system that resists the effects of age so remarkably. By contrast, non-specific cognitive changes may decrease the performance of the elderly on various tasks, reflecting difficulties of access to semantic representations rather than an alteration of the semantic stock itself. Some dementias, such as semantic dementia and Alzheimer's disease, are linked to an alteration of semantic memory. Using computational ontologies, we propose in this paper a formal and relatively lightweight modeling in the service of neuropsychology: (1) for the practitioner, with decision-support systems; (2) for the patient, as an outsourced cognitive prosthesis; and (3) for the researcher, to study semantic memory.

  18. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; parallel algorithms and the parallelization of programs on parallel computers with both shared and distributed memory are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, applying a divide-and-conquer strategy, adjusting the algorithmic structure of the program, breaking data dependences, identifying parallelizable components and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers, such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained.

  19. Consciousness and working memory: Current trends and research perspectives.

    Science.gov (United States)

    Velichkovsky, Boris B

    2017-10-01

    Working memory has long been thought to be closely related to consciousness. However, recent empirical studies show that unconscious content may be maintained within working memory and that complex cognitive computations may be performed on-line. This promotes research on the exact relationships between consciousness and working memory. Current evidence for working memory being a conscious as well as an unconscious process is reviewed. Consciousness is shown to be considered a subset of working memory by major current theories of working memory. Evidence for unconscious elements in working memory is shown to come from visual masking and attentional blink paradigms, and from the studies of implicit working memory. It is concluded that more research is needed to explicate the relationship between consciousness and working memory. Future research directions regarding the relationship between consciousness and working memory are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. An A.P.L. micro-programmed machine: implementation on a Multi-20 mini-computer, memory organization, micro-programming and flowcharts

    International Nuclear Information System (INIS)

    Granger, Jean-Louis

    1975-01-01

    This work deals with the presentation of an APL interpreter implemented on a MULTI 20 mini-computer. It includes a left-to-right syntax analyser and a recursive routine for generation and execution. This routine uses a beating method for array processing. Moreover, during the execution of all APL statements, dynamic memory allocation is used. Execution of basic operations has been micro-programmed. The basic APL interpreter has a length of 10 K bytes. It uses overlay methods. (author) [fr

  1. Neural Global Pattern Similarity Underlies True and False Memories.

    Science.gov (United States)

    Ye, Zhifang; Zhu, Bi; Zhuang, Liping; Lu, Zhonglin; Chen, Chuansheng; Xue, Gui

    2016-06-22

    The neural processes giving rise to human memory strength signals remain poorly understood. Inspired by formal computational models that posit a central role of global matching in memory strength, we tested a novel hypothesis that the strengths of both true and false memories arise from the global similarity of an item's neural activation pattern during retrieval to that of all the studied items during encoding (i.e., the encoding-retrieval neural global pattern similarity [ER-nGPS]). We revealed multiple ER-nGPS signals that carried distinct information and contributed differentially to true and false memories: Whereas the ER-nGPS in the parietal regions reflected semantic similarity and was scaled with the recognition strengths of both true and false memories, ER-nGPS in the visual cortex contributed solely to true memory. Moreover, ER-nGPS differences between the parietal and visual cortices were correlated with frontal monitoring processes. By combining computational and neuroimaging approaches, our results advance a mechanistic understanding of memory strength in recognition. What neural processes give rise to memory strength signals, and lead to our conscious feelings of familiarity? Using fMRI, we found that the memory strength of a given item depends not only on how it was encoded during learning, but also on the similarity of its neural representation with other studied items. The global neural matching signal, mainly in the parietal lobule, could account for the memory strengths of both studied and unstudied items. Interestingly, a different global matching signal, originating from the visual cortex, could distinguish true from false memories. The findings reveal multiple neural mechanisms underlying the memory strengths of events registered in the brain. Copyright © 2016 the authors.

  2. A view of Kanerva's sparse distributed memory

    Science.gov (United States)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. The overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.

  3. High Temperature Memories in SiC Technology

    OpenAIRE

    Ekström, Mattias

    2014-01-01

    This thesis is part of the Working On Venus (WOV) project. The aim of the project is to design electronics in silicon carbide (SiC) that can withstand the extreme surface environment of Venus. This thesis investigates some possible computer memory technologies that could survive on the surface of Venus. A memory must be able to function at 460 °C and after a total radiation dose of at least 200 Gy (SiC). This thesis is a literature survey. The thesis covers several Random-Access Memory (RAM) ...

  4. Performing stencil computations

    Energy Technology Data Exchange (ETDEWEB)

    Donofrio, David

    2018-01-16

    A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
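
    A toy emulation of the claimed access pattern, purely for illustration: the offset selects a register, the register supplies an address, and the value at that address is returned, as a single logical operation. All values below are invented.

    ```python
    # Toy emulation of the access pattern in the claims: offset -> register ->
    # address -> value, performed as one logical load. Values are illustrative.
    memory = {0x100: 3.5, 0x104: 7.25, 0x108: -1.0}
    registers = [0x100, 0x104, 0x108]     # each register holds a neighbor's address

    def load_with_offset(offset):
        address = registers[offset]       # identify the register from the offset
        return memory[address]            # retrieve the value at that address

    print([load_with_offset(k) for k in range(3)])   # e.g., stencil neighbor values
    ```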

  5. Electrostatically telescoping nanotube nonvolatile memory device

    International Nuclear Information System (INIS)

    Kang, Jeong Won; Jiang Qing

    2007-01-01

    We propose a nonvolatile memory based on carbon nanotubes (CNTs) serving as the key building blocks for molecular-scale computers and investigate the dynamic operations of a double-walled CNT memory element by classical molecular dynamics simulations. The localized potential energy wells achieved from both the interwall van der Waals energy and CNT-metal binding energy make the bistability of the CNT positions and the electrostatic attractive forces induced by the voltage differences lead to the reversibility of this CNT memory. The material for the electrodes should be carefully chosen to achieve the nonvolatility of this memory. The kinetic energy of the CNT shuttle experiences several rebounds induced by the collisions of the CNT onto the metal electrodes, and this is critically important to the performance of such an electrostatically telescoping CNT memory because the collision time is sufficiently long to cause a delay of the state transition

  6. Computational Modeling of Shape Memory Polymer Origami that Responds to Light

    Science.gov (United States)

    Mailen, Russell William

    Shape memory polymers (SMPs) transform in response to external stimuli, such as infrared (IR) light. Although SMPs have many applications, this investigation focuses on their use as actuators in self-folding origami structures. Ink patterned on the surface of the SMP sheet absorbs thermal energy from the IR light, which produces localized heating. The material shrinks wherever the activation temperature is exceeded and can produce out-of-plane deformation. The time and temperature dependent response of these SMPs provides unique opportunities for developing complex three-dimensional (3D) structures from initially flat sheets through self-folding origami, but the application of this technique requires predicting accurately the final folded or deformed shape. Furthermore, current computational approaches for SMPs do not fully couple the thermo-mechanical response of the material. Hence, a proposed nonlinear, 3D, thermo-viscoelastic finite element framework was formulated to predict deformed shapes for different self-folding systems and compared to experimental results for self-folding origami structures. A detailed understanding of the shape memory response and the effect of controllable design parameters, such as the ink pattern, pre-strain conditions, and applied thermal and mechanical fields, allows for a predictive understanding and design of functional, 3D structures. The proposed modeling framework was used to obtain a fundamental understanding of the thermo-mechanical behavior of SMPs and the impact of the material behavior on hinged self-folding. These predictions indicated how the thermal and mechanical conditions during pre-strain significantly affect the shrinking and folding response of the SMP. Additionally, the externally applied thermal loads significantly influenced the folding rate and maximum bending angle. The computational framework was also adapted to understand the effects of fully coupling the thermal and mechanical response of the material

  7. Directions for memory hierarchies and their components: research and development

    International Nuclear Information System (INIS)

    Smith, A.J.

    1978-10-01

    The memory hierarchy is usually the largest identifiable part of a computer system and making effective use of it is critical to the operation and use of the system. The levels of such a memory hierarchy are considered and the state of the art and likely directions for both research and development are described. Algorithmic and logical features of the hierarchy not directly associated with specific components are also discussed. Among the problems believed to be the most significant are the following: (a) evaluate the effectiveness of gap filler technology as a level of storage between main memory and disk, and if it proves to be effective, determine how/where it should be used, (b) develop algorithms for the use of mass storage in a large computer system, and (c) determine how cache memories should be implemented in very large, fast multiprocessor systems

  8. Computers, the Human Mind, and My In-Laws' House.

    Science.gov (United States)

    Esque, Timm J.

    1996-01-01

    Discussion of human memory, computer memory, and the storage of information focuses on a metaphor that can account for memory without storage and can set the stage for systemic research around a more comprehensive, understandable theory. (Author/LRW)

  9. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    Science.gov (United States)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  10. Personal Computers.

    Science.gov (United States)

    Toong, Hoo-min D.; Gupta, Amar

    1982-01-01

    Describes the hardware, software, applications, and current proliferation of personal computers (microcomputers). Includes discussions of microprocessors, memory, output (including printers), application programs, the microcomputer industry, and major microcomputer manufacturers (Apple, Radio Shack, Commodore, and IBM). (JN)

  11. Database architecture optimized for the new bottleneck: Memory access

    NARCIS (Netherlands)

    P.A. Boncz (Peter); S. Manegold (Stefan); M.L. Kersten (Martin)

    1999-01-01

    In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the

  12. Optimizing Database Architecture for the New Bottleneck: Memory Access

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2000-01-01

    In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the
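
    The "simple scan test" both records refer to can be sketched in a few lines: traverse the same data sequentially and in a cache-hostile order and compare timings. The array size and stride constant are illustrative; the measured ratio depends entirely on the machine's cache hierarchy.

    ```python
    # Sketch of a simple scan test: sum the same array in sequential order and
    # in a cache-hostile permuted order. The slowdown of the second traversal
    # exposes main-memory access as the bottleneck.
    import time
    import numpy as np

    data = np.arange(2**24, dtype=np.float64)   # 128 MB, larger than typical caches

    def timed_sum(indices):
        start = time.perf_counter()
        total = data[indices].sum()
        return time.perf_counter() - start, total

    sequential = np.arange(data.size)
    permuted = (sequential * 4099) % data.size  # odd multiplier => full permutation

    t_seq, _ = timed_sum(sequential)
    t_perm, _ = timed_sum(permuted)
    print(f"sequential: {t_seq:.3f} s, permuted: {t_perm:.3f} s, "
          f"ratio: {t_perm / t_seq:.1f}x")
    ```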

  13. Bulk-memory processor for data acquisition

    International Nuclear Information System (INIS)

    Nelson, R.O.; McMillan, D.E.; Sunier, J.W.; Meier, M.; Poore, R.V.

    1981-01-01

    To meet the diverse needs and data rate requirements at the Van de Graaff and Weapons Neutron Research (WNR) facilities, a bulk memory system has been implemented which includes a fast and flexible processor. This bulk memory processor (BMP) utilizes bit slice and microcode techniques and features a 24 bit wide internal architecture allowing direct addressing of up to 16 megawords of memory and histogramming up to 16 million counts per channel without overflow. The BMP is interfaced to the MOSTEK MK 8000 bulk memory system and to the standard MODCOMP computer I/O bus. Coding for the BMP both at the microcode level and with macro instructions is supported. The generalized data acquisition system has been extended to support the BMP in a manner transparent to the user

  14. 32-Bit FASTBUS computer

    International Nuclear Information System (INIS)

    Blossom, J.M.; Hong, J.P.; Kellner, R.G.

    1985-01-01

    Los Alamos National Laboratory is building a 32-bit FASTBUS computer using the NATIONAL SEMICONDUCTOR 32032 central processing unit (CPU) and containing 16 million bytes of memory. The board can act both as a FASTBUS master and as a FASTBUS slave. It contains a custom direct memory access (DMA) channel which can perform 80 million bytes per second block transfers across the FASTBUS

  15. Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, Keren

    2014-08-28

    Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The Sandia-led "Data Movement Dominates" project aimed to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing with manageable power budgets. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we created an integrated modeling and simulation environment that uniquely captures the physical behavior of the optical layer. The PhoenxSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM

  16. SODR Memory Control Buffer Control ASIC

    Science.gov (United States)

    Hodson, Robert F.

    1994-01-01

    The Spacecraft Optical Disk Recorder (SODR) is a state-of-the-art mass storage system for future NASA missions requiring high transmission rates and a large-capacity storage system. This report covers the design and development of an SODR memory buffer control application-specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times, (2) converting data formats from a high performance parallel interface format to a small computer systems interface format. Ten 144-pin, 50 MHz CMOS ASICs were designed, fabricated and tested to implement the memory buffer control function.

  17. Multichannel analyzer using the direct-memory-access channel in a personal computer; Mnogokanal`nyj analizator v personal`nom komp`yutere, ispol`zuyushchij kanal pryamogo dostupa k pamyati

    Energy Technology Data Exchange (ETDEWEB)

    Georgiev, G; Vankov, I; Dimitrov, L [Inst. Yadernykh Issledovanij i Yadernoj Ehnergetiki Bolgarskoj Akademii Nauk, Sofiya (Bulgaria); Peev, I [Firma TOIVEL, Sofiya (Bulgaria)

    1996-12-31

    The paper describes a multichannel analyzer for spectrometry data, built around personal computer memory and a controlled direct-memory-access channel. The analyzer software, consisting of a driver and a spectrum display control program, is described. 2 figs.

  18. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    Science.gov (United States)

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.

  19. Quantum memory for superconducting qubits

    International Nuclear Information System (INIS)

    Pritchett, Emily J.; Geller, Michael R.

    2005-01-01

    Many protocols for quantum computation require a memory element to store qubits. We discuss the speed and accuracy with which quantum states prepared in a superconducting qubit can be stored in and later retrieved from an attached high-Q resonator. The memory fidelity depends on both the qubit-resonator coupling strength and the location of the state on the Bloch sphere. Our results show that a quantum memory demonstration should be possible with existing superconducting qubit designs, which would be an important milestone in solid-state quantum information processing. Although we specifically focus on a large-area, current-biased Josephson-junction phase qubit coupled to the dilatational mode of a piezoelectric nanoelectromechanical disk resonator, many of our results will apply to other qubit-oscillator models

  20. Tiling and Asynchronous Communication Optimizations for Stencil Computations

    KAUST Repository

    Malas, Tareq

    2015-12-07

    The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. Most of the established work concentrates on updating separate cache blocks per thread, which works on all types of shared memory systems, regardless of whether there is a shared cache among the cores. This approach is memory-bandwidth limited in several situations, where the cache space for each thread can be too small to provide sufficient in-cache data reuse. We introduce a generalized multi-dimensional intra-tile parallelization scheme for shared-cache multicore processors that results in a significant reduction of cache size requirements and shows a large saving in memory bandwidth usage compared to existing approaches. It also provides data access patterns that allow efficient hardware prefetching. Our parameterized thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the Central Processing Unit (CPU). We also introduce an efficient diamond tiling structure for both shared memory cache blocking and distributed memory relaxed-synchronization communication, demonstrated using one-dimensional domain decomposition. We describe the approach and our open-source testbed implementation details (called Girih), present performance results on contemporary Intel processors, and apply advanced performance modeling techniques to reconcile the observed performance with hardware capabilities. Furthermore, we conduct a comparison with the state-of-the-art stencil frameworks PLUTO and Pochoir in shared memory, using corner-case stencil operators. We study the
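    As a toy illustration of the kind of loop structure these schemes reorganize (our sketch, not Girih; real diamond tiling also spans the time dimension), here is a 1-D 3-point Jacobi sweep computed one spatial tile at a time:

```python
import numpy as np

def jacobi_sweep_tiled(u, tile=16):
    """One Jacobi time step over the interior points, one cache block at a time."""
    v = u.copy()
    n = len(u)
    for start in range(1, n - 1, tile):   # spatial blocking only; temporal
        stop = min(start + tile, n - 1)   # blocking would also fuse time steps
        v[start:stop] = (u[start - 1:stop - 1] + u[start:stop] + u[start + 1:stop + 1]) / 3.0
    return v

u = np.linspace(0.0, 1.0, 66)
for _ in range(4):                        # four time steps
    u = jacobi_sweep_tiled(u)
print(u[:4])
```

    A temporal blocking scheme such as diamond tiling would additionally keep each tile cache-resident across several of these time steps, which is where the memory-traffic savings described above come from.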

  1. Real-time stereo matching architecture based on 2D MRF model: a memory-efficient systolic array

    Directory of Open Access Journals (Sweden)

    Park Sungchan

    2011-01-01

    There is a growing need in computer vision applications for stereopsis, requiring not only accurate distance but also fast and compact physical implementation. Global energy minimization techniques provide remarkably precise results, but they suffer from huge computational complexity. One of the main challenges is to parallelize the iterative computation, solving the memory-access problem between the large external memory and the massive processors. Remarkable memory savings can be obtained with our memory reduction scheme, and our new architecture is a systolic array. If we expand it across multiple chips in a cascaded manner, we can cope with various ranges of image resolutions. We have realized it using FPGA technology. Our architecture requires 19 times less memory than the global minimization technique, which is a principal step toward real-time chip implementation of various iterative image processing algorithms with tiny and distributed memory resources, such as optical flow, image restoration, etc.

  2. Disk access controller for Multi 8 computer

    International Nuclear Information System (INIS)

    Segalard, Jean

    1970-01-01

    After having presented the initial characteristics and weaknesses of the software provided for the control of a memory disk coupled with a Multi 8 computer, the author reports the development and improvement of this controller software. He presents the different constitutive parts of the computer and the operation of the disk coupling and of the direct access to memory. He reports the development of the disk access controller: software organisation, loader, subprograms and statements

  3. Bank switched memory interface for an image processor

    International Nuclear Information System (INIS)

    Barron, M.; Downward, J.

    1980-09-01

    A commercially available image processor is interfaced to a PDP-11/45 through an 8K window of memory addresses. When the image processor was not in use it was desired to be able to use the 8K address space as real memory. The standard method of accomplishing this would have been to use UNIBUS switches to switch in either the physical 8K bank of memory or the image processor memory. This method has the disadvantage of being rather expensive. As a simple alternative, a device was built to selectively enable or disable either an 8K bank of memory or the image processor memory. To enable the image processor under program control, GEN is contracted in size, the memory is disabled, a device partition for the image processor is created above GEN, and the image processor memory is enabled. The process is reversed to restore memory to GEN. The hardware to enable/disable the image and computer memories is controlled using spare bits from a DR-11K output register. The image processor and physical memory can be switched in or out on line with no adverse effects on the system's operation

  4. Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.; Zhong, Y. P.; Deng, Y. F.; Zhou, Y. X.; Xu, L.; Miao, X. S., E-mail: miaoxs@mail.hust.edu.cn [Wuhan National Laboratory for Optoelectronics (WNLO), Huazhong University of Science and Technology (HUST), Wuhan 430074 (China); School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2013-12-21

    Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massively parallel computing beyond the von Neumann architecture. Here we propose phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations, verified in SPICE simulations and circuit experiments. The logic operations are performed in parallel, and the results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices.

  5. Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory

    International Nuclear Information System (INIS)

    Li, Y.; Zhong, Y. P.; Deng, Y. F.; Zhou, Y. X.; Xu, L.; Miao, X. S.

    2013-01-01

    Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massively parallel computing beyond the von Neumann architecture. Here we propose phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations, verified in SPICE simulations and circuit experiments. The logic operations are performed in parallel, and the results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices

  6. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
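    One simple way to quantify metacognition of this kind (our illustration; the study itself uses a more elaborate signal-detection measure) is the Goodman-Kruskal gamma association between trial-by-trial confidence and accuracy:

```python
from itertools import combinations

def gamma(confidence, correct):
    """Goodman-Kruskal gamma: how well confidence ratings track correctness."""
    concordant = discordant = 0
    for (c1, a1), (c2, a2) in combinations(zip(confidence, correct), 2):
        if c1 == c2 or a1 == a2:
            continue                      # tied pairs carry no order information
        if (c1 - c2) * (a1 - a2) > 0:
            concordant += 1
        else:
            discordant += 1
    total = concordant + discordant
    return 0.0 if total == 0 else (concordant - discordant) / total

conf = [4, 1, 3, 2, 4, 1]   # invented confidence ratings
acc = [1, 0, 1, 0, 1, 1]    # 1 = correct change-detection response
print(gamma(conf, acc))
```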

  7. Topology influences performance in the associative memory neural networks

    International Nuclear Information System (INIS)

    Lu Jianquan; He Juan; Cao Jinde; Gao Zhiqiang

    2006-01-01

    To explore how topology affects performance within Hopfield-type associative memory neural networks (AMNNs), we studied the computational performance of neural networks with regular lattice, random, small-world, and scale-free structures. In this Letter, we found that the memory performance obtained through asynchronous updating from 'larger' nodes to 'smaller' nodes is better than that obtained through asynchronous updating in random order, especially for the scale-free topology. The computational performance of associative memory neural networks built on the above-mentioned topologies with the same numbers of nodes (neurons) and edges (synapses) was studied. As the topologies become more random and less locally ordered, the performance of the associative memory network improves considerably. By comparison, we show that the regular lattice and the random network form two extremes in terms of pattern stability and retrievability. For a network, pattern stability and retrievability can be largely enhanced by adding a random component or some shortcuts to its structured component. According to the conclusions of this Letter, one can design associative memory neural networks with high performance and minimal interconnect requirements
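    A minimal sketch of the update schedule the Letter describes (our simplification: a random graph stands in for the lattice/small-world/scale-free topologies, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Random adjacency standing in for the topologies compared in the Letter
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T

W = (patterns.T @ patterns) * A / N        # Hebbian weights masked by topology
order = np.argsort(-A.sum(axis=1))         # update 'larger' (high-degree) nodes first

def recall(state, sweeps=5):
    s = state.copy()
    for _ in range(sweeps):
        for i in order:                    # asynchronous, degree-ordered updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

probe = patterns[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])  # noisy cue
print(np.mean(recall(probe) == patterns[0]))   # overlap with the stored pattern
```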

  8. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  9. Local computer network of the JINR Neutron Physics Laboratory

    International Nuclear Information System (INIS)

    Alfimenkov, A.V.; Vagov, V.A.; Vajdkhadze, F.

    1988-01-01

    A new high-speed local computer network, with an intelligent network adapter (NA) as its hardware base, has been developed in the JINR Neutron Physics Laboratory to increase operation efficiency and data transfer rate. The NA consists of a computer bus interface, a cable former, and a microcomputer segment designed both for program realization of the channel-level protocol and for organization of bidirectional information transfer through a direct access channel between the monochannel and computer memory, with or without buffering in the NA operation memory device

  10. Outline of a novel architecture for cortical computation.

    Science.gov (United States)

    Majumdar, Kaushik

    2008-03-01

    In this paper a novel architecture for cortical computation has been proposed. This architecture is composed of computing paths consisting of neurons and synapses. These paths have been decomposed into lateral, longitudinal and vertical components. Cortical computation has then been decomposed into lateral computation (LaC), longitudinal computation (LoC) and vertical computation (VeC). It has been shown that various loop structures in the cortical circuit play important roles in cortical computation as well as in memory storage and retrieval, keeping in conformity with the molecular basis of short and long term memory. A new learning scheme for the brain has also been proposed and how it is implemented within the proposed architecture has been explained. A few mathematical results about the architecture have been proposed, some of which are without proof.

  11. Age effects on explicit and implicit memory

    Directory of Open Access Journals (Sweden)

    Emma Ward

    2013-09-01

    It is well documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favours the single-system view. Implications for the memory systems debate are discussed.

  12. Age effects on explicit and implicit memory.

    Science.gov (United States)

    Ward, Emma V; Berry, Christopher J; Shanks, David R

    2013-01-01

    It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed.

  13. Explorations in quantum computing

    CERN Document Server

    Williams, Colin P

    2011-01-01

    By the year 2020, the basic memory components of a computer will be the size of individual atoms. At such scales, the current theory of computation will become invalid. "Quantum computing" is reinventing the foundations of computer science and information theory in a way that is consistent with quantum physics - the most accurate model of reality currently known. Remarkably, this theory predicts that quantum computers can perform certain tasks breathtakingly faster than classical computers -- and, better yet, can accomplish mind-boggling feats such as teleporting information, breaking suppos

  14. Visuospatial memory computations during whole-body rotations in roll

    NARCIS (Netherlands)

    Pelt, S. van; Gisbergen, J.A.M. van; Medendorp, W.P.

    2005-01-01

    We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic

  15. Computational aspects of feedback in neural circuits.

    Directory of Open Access Journals (Sweden)

    Wolfgang Maass

    2007-01-01

    It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also
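    A hedged sketch of the central idea in echo-state-network style (our stand-in, not the paper's detailed cortical microcircuit model; all sizes and the ridge-regression training are illustrative): a readout trained with teacher forcing and then fed back into a random recurrent circuit can hold a non-fading evidence integral.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 500
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) * 0.9   # recurrent weights
W_in = rng.normal(0.0, 1.0, N)                        # input weights
W_fb = rng.normal(0.0, 1.0, N)                        # readout feedback weights

u = rng.choice([0.0, 1.0], T, p=[0.95, 0.05])         # sparse input pulses
target = np.cumsum(u)
target /= target.max()                                # evidence integral to hold

# Teacher forcing: drive the feedback with the target, then fit the readout.
X, x = np.zeros((T, N)), np.zeros(N)
for t in range(T):
    z = target[t - 1] if t > 0 else 0.0
    x = np.tanh(W @ x + W_in * u[t] + W_fb * z)
    X[t] = x
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ target)

# Closed loop: the readout now feeds its own output back into the circuit.
x, z, out = np.zeros(N), 0.0, []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t] + W_fb * z)
    z = float(w_out @ x)
    out.append(z)
print(np.corrcoef(out, target)[0, 1])                 # should track the integral
```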

  16. Trinary Associative Memory Would Recognize Machine Parts

    Science.gov (United States)

    Liu, Hua-Kuang; Awwal, Abdul Ahad S.; Karim, Mohammad A.

    1991-01-01

    Trinary associative memory combines the merits, and overcomes the major deficiencies, of unipolar and bipolar logics by combining them in a three-valued logic that reverts to unipolar or bipolar binary selectively, as needed to perform specific tasks. The advantage of associative memory is that one obtains access to all parts of it simultaneously on the basis of the content, rather than the address, of the data. Consequently, it can be used to fully exploit the parallelism and speed of optical computing.

  17. III. NIH TOOLBOX COGNITION BATTERY (CB): MEASURING EPISODIC MEMORY

    OpenAIRE

    Bauer, Patricia J.; Dikmen, Sureyya S.; Heaton, Robert K.; Mungas, Dan; Slotkin, Jerry; Beaumont, Jennifer L.

    2013-01-01

    One of the most significant domains of cognition is episodic memory, which allows for rapid acquisition and long-term storage of new information. For purposes of the NIH Toolbox, we devised a new test of episodic memory. The nonverbal NIH Toolbox Picture Sequence Memory Test (TPSMT) requires participants to reproduce the order of an arbitrarily ordered sequence of pictures presented on a computer. To adjust for ability, sequence length varies from 6 to 15 pictures. Multiple trials are adminis...

  18. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model.

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Lin, Chia-Hung; Li, Chien-Ming

    2018-02-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidity and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500-700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level on both end-sensing units and remote monitoring devices via a wireless network and fog/cloud computing. The animal experimental results (pig blood) demonstrate the feasibility.

  19. Computers in Nuclear Medicine. Chapter 12

    Energy Technology Data Exchange (ETDEWEB)

    Parker, J. A. [Division of Nuclear Medicine and Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA (United States)

    2014-12-15

    In 1965, Gordon Moore, a co-founder of Intel, said that new memory chips have twice the capacity of prior chips, and that new chips are released every 18 to 24 months. This statement has become known as Moore’s law. Moore’s law means that memory size increases exponentially. More generally, the exponential growth of computers has applied not only to memory size, but also to many computer capabilities, and since 1965, Moore’s law has remained remarkably accurate. Further, this remarkable growth in capabilities has occurred with a steady decrease in price. Anyone who has even a little appreciation of exponential growth realizes that exponential growth cannot continue indefinitely. However, the history of computers is littered with ‘experts’ who have prematurely declared the end of Moore’s law. The quotation at the beginning of this section indicates that future growth of computers has often been underestimated. The exponential growth of computer capabilities has a very important implication for the management of a nuclear medicine department. The growth in productivity of the staff of a department is slow, especially when compared to the growth in capabilities of a computer. This means that whatever decision was made in the past about the balance between staff and computers is now out of date. A good heuristic is: always apply more computer capacity and fewer people to a new task. Or stated more simply, hardware is ‘cheap’, at least with respect to what you learned in training or what you decided last time you considered the balance between hardware and ‘peopleware’.

  20. High-Speed Non-Volatile Optical Memory: Achievements and Challenges

    Directory of Open Access Journals (Sweden)

    Vadym Zayets

    2017-01-01

    We have proposed, fabricated, and studied a new design of a high-speed optical non-volatile memory. The recording mechanism of the proposed memory utilizes a magnetization reversal of a nanomagnet by a spin-polarized photocurrent. It was shown experimentally that the operational speed of this memory may be extremely fast, above 1 TBit/s. The challenges to realizing both high-speed recording and high-speed reading are discussed. The memory is compact, integratable, and compatible with present semiconductor technology. If realized, it will advance data processing and computing technology towards a faster operation speed.

  1. Investigation of Cloud Computing: Applications and Challenges

    OpenAIRE

    Amid Khatibi Bardsiri; Anis Vosoogh; Fatemeh Ahoojoosh

    2014-01-01

    Cloud computing is a model for storing data or knowledge on remote servers accessed through the Internet. It can save the required memory space and reduce the cost of extending memory capacity in users’ own machines. Cloud computing therefore has several benefits for individuals as well as organizations. It provides protection for personal and organizational data. Further, with the help of cloud services, a business owner, organization manager or service provider will be able to make privacy an...

  2. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing; Zhang, Weigong; Fu, Xin

    2017-02-06

    The performance of 3D rendering on a Graphics Processing Unit, which converts a 3D vector stream into a 2D frame with 3D image effects, significantly impacts users’ gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving overall rendering performance. 3D-stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable a Processing-In-Memory-based GPU for efficient 3D rendering.

  3. Study and obtention of exact, and approximation, algorithms and heuristics for a mesh partitioning problem under memory constraints

    International Nuclear Information System (INIS)

    Morais, Sebastien

    2016-01-01

    In many scientific areas, the size and the complexity of numerical simulations lead to intensive use of massively parallel runs on High Performance Computing (HPC) architectures. Such computers consist of a set of processing units (PU) among which memory is distributed. The distribution of simulation data is therefore crucial: it has to minimize the computation time of the simulation while ensuring that the data allocated to every PU can be stored locally in memory. For most numerical simulations, the physical and numerical data are based on a mesh. The computations are then performed at the cell level (for example within triangles and quadrilaterals in 2D, or within tetrahedrons and hexahedrons in 3D). More specifically, a computing cost and a memory cost can be associated with each cell. In our context, where the mathematical methods used are finite elements or finite volumes, the computations associated with a cell may require information carried by neighboring cells. The standard implementation locally stores the useful data of this neighborhood on the PU, even if the cells of this neighborhood are not computed locally. Such non-computed but stored cells are called ghost cells, and they can have a significant impact on the memory consumption of a PU. The problem to solve is thus not only to partition a mesh into several parts by assigning each cell to one and only one part while minimizing the computational load of each part; it is also necessary to ensure that the memory load of both the cells where the computations are performed and their neighbors fits into PU memory. This leads to partitioning the computations while the mesh is distributed with overlaps. Explicitly taking these data overlaps into account is the problem that we propose to study. (author) [fr]
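    The memory constraint being described can be made concrete with a small sketch (ours; the mesh, costs, and partition below are invented): the memory load of a part is the cost of its own cells plus the ghost cells it must also store.

```python
def part_loads(cells, neighbors, assignment, n_parts, mem_cost):
    """Compute per-part compute load and memory load including ghost cells."""
    compute = [0.0] * n_parts
    stored = [set() for _ in range(n_parts)]
    for c in cells:
        p = assignment[c]
        compute[p] += 1.0
        stored[p].add(c)
        stored[p].update(neighbors[c])    # ghost cells: stored but not computed
    memory = [sum(mem_cost[c] for c in s) for s in stored]
    return compute, memory

# A 6-cell 1-D mesh split into two parts of three cells each
cells = range(6)
neighbors = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
assignment = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
mem_cost = {c: 1.0 for c in cells}
print(part_loads(cells, neighbors, assignment, 2, mem_cost))
# each part computes 3 cells but must store 4 (one ghost across the cut)
```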

  4. Spatial memory and animal movement.

    Science.gov (United States)

    Fagan, William F; Lewis, Mark A; Auger-Méthé, Marie; Avgar, Tal; Benhamou, Simon; Breed, Greg; LaDage, Lara; Schlägel, Ulrike E; Tang, Wen-wu; Papastamatiou, Yannis P; Forester, James; Mueller, Thomas

    2013-10-01

    Memory is critical to understanding animal movement but has proven challenging to study. Advances in animal tracking technology, theoretical movement models and cognitive sciences have facilitated research in each of these fields, but also created a need for synthetic examination of the linkages between memory and animal movement. Here, we draw together research from several disciplines to understand the relationship between animal memory and movement processes. First, we frame the problem in terms of the characteristics, costs and benefits of memory as outlined in psychology and neuroscience. Next, we provide an overview of the theories and conceptual frameworks that have emerged from behavioural ecology and animal cognition. Third, we turn to movement ecology and summarise recent, rapid developments in the types and quantities of available movement data, and in the statistical measures applicable to such data. Fourth, we discuss the advantages and interrelationships of diverse modelling approaches that have been used to explore the memory-movement interface. Finally, we outline key research challenges for the memory and movement communities, focusing on data needs and mathematical and computational challenges. We conclude with a roadmap for future work in this area, outlining axes along which focused research should yield rapid progress. © 2013 John Wiley & Sons Ltd/CNRS.

  5. Context-dependent memory decay is evidence of effort minimization in motor learning: a computational study.

    Science.gov (United States)

    Takiyama, Ken

    2015-01-01

    Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed whether the trained or a non-trained movement is performed, a recent study reported that the motor memory decays faster while performing the trained movement than while performing non-trained movements, i.e., the decay rate of motor memory is movement- or context-dependent. Although motor learning has been successfully modeled based on an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered the optimization. Here, I analytically and numerically reveal that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.
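    A minimal sketch of the two-process model with a context-dependent retention factor (our parameterization; the values are illustrative, not fitted to the cited data):

```python
def simulate(trials, movements, A_trained=0.95, A_other=0.99, B=0.1, target=1.0):
    """Trial-by-trial error-driven learning with movement-dependent decay."""
    x = {m: 0.0 for m in movements}            # one memory per movement context
    history = []
    for m in trials:
        error = target - x[m]                  # error on the performed movement
        for k in movements:                    # every memory decays each trial,
            x[k] *= A_trained if k == m else A_other  # faster if it is performed
        x[m] += B * error                      # error minimization
        history.append(dict(x))
    return history

trials = ['reach'] * 50                        # train one movement repeatedly
print(simulate(trials, movements=['reach', 'other'])[-1])
```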

  6. Context-dependent memory decay is evidence of effort minimization in motor learning: A computational study

    Directory of Open Access Journals (Sweden)

    Ken Takiyama

    2015-02-01

    Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed whether the trained or a non-trained movement is performed, a recent study reported that the motor memory decays faster while performing the trained movement than while performing non-trained movements, i.e., the decay rate of motor memory is movement- or context-dependent. Although motor learning has been successfully modeled based on an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered the optimization. Here, I analytically and numerically reveal that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.

  7. Serotonergic modulation of spatial working memory: predictions from a computational network model

    Directory of Open Access Journals (Sweden)

    Maria Cano-Colino

    2013-09-01

    Serotonin (5-HT) receptors of types 1A and 2A are massively expressed in prefrontal cortex (PFC) neurons, an area associated with cognitive function. Hence, 5-HT could be effective in modulating prefrontal-dependent cognitive functions, such as spatial working memory (SWM). However, a direct association between 5-HT and SWM has proved elusive in psycho-pharmacological studies. Recently, a computational network model of the PFC microcircuit was used to explore the relationship between 5‑HT and SWM (Cano-Colino et al. 2013). This study found that both excessive and insufficient 5-HT levels lead to impaired SWM performance in the network, and it concluded that analyzing behavioral responses based on confidence reports could facilitate the experimental identification of SWM behavioral effects of 5‑HT neuromodulation. Such analyses may have confounds based on our limited understanding of metacognitive processes. Here, we extend these results by deriving three additional predictions from the model that do not rely on confidence reports. Firstly, only excessive levels of 5-HT should result in SWM deficits that increase with delay duration. Secondly, excessive 5-HT baseline concentration makes the network vulnerable to distractors at distances that were robust to distraction in control conditions, while the network still ignores distractors efficiently for low 5‑HT levels that impair SWM. Finally, 5-HT modulates neuronal memory fields in neurophysiological experiments: neurons should be better tuned to the cued stimulus than to the behavioral report for excessive 5-HT levels, while the reverse should happen for low 5-HT concentrations. In all our simulations, agonists of 5-HT1A receptors and antagonists of 5-HT2A receptors produced behavioral and physiological effects in line with global 5-HT level increases. Our model makes specific predictions to be tested experimentally and advance our understanding of the neural basis of SWM and its neuromodulation.

  8. Assessing Programming Costs of Explicit Memory Localization on a Large Scale Shared Memory Multiprocessor

    Directory of Open Access Journals (Sweden)

    Silvio Picano

    1992-01-01

    We present detailed experimental work involving a commercially available large scale shared memory multiple instruction stream-multiple data stream (MIMD) parallel computer having a software controlled cache coherence mechanism. To make effective use of such an architecture, the programmer is responsible for designing the program's structure to match the underlying multiprocessor's capabilities. We describe the techniques used to exploit our multiprocessor (the BBN TC2000) on a network simulation program, showing the resulting performance gains and the associated programming costs. We show that an efficient implementation relies heavily on the user's ability to explicitly manage the memory system.

  9. Multiprocessor shared-memory information exchange

    International Nuclear Information System (INIS)

    Santoline, L.L.; Bowers, M.D.; Crew, A.W.; Roslund, C.J.; Ghrist, W.D. III

    1989-01-01

    In distributed microprocessor-based instrumentation and control systems, the inter- and intra-subsystem communication requirements ultimately form the basis for the overall system architecture. This paper describes a software protocol which addresses the intra-subsystem communications problem. Specifically, the protocol allows multiple processors to exchange information via a shared-memory interface. The authors' primary goal is to provide a reliable means for information to be exchanged between central application processor boards (masters) and dedicated function processor boards (slaves) in a single computer chassis. The resultant Multiprocessor Shared-Memory Information Exchange (MSMIE) protocol, a standard master-slave shared-memory interface suitable for use in nuclear safety systems, is designed to pass unidirectional buffers of information between the processors while providing a minimum, deterministic cycle time for this data exchange
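    A loose sketch of the pattern (ours, in Python threads rather than the protocol's actual buffer-state machine): the slave publishes complete unidirectional buffers under a short, deterministic critical section, and masters read the newest complete one.

```python
import threading

class SharedExchange:
    """Toy master-slave shared-memory exchange in the spirit of MSMIE."""

    def __init__(self):
        self._lock = threading.Lock()
        self._newest = None               # last complete buffer from the slave

    def slave_publish(self, buffer):
        with self._lock:                  # bounded, deterministic buffer swap
            self._newest = buffer

    def master_read(self):
        with self._lock:
            return self._newest

ex = SharedExchange()
t = threading.Thread(target=ex.slave_publish, args=([1, 2, 3],))
t.start()
t.join()
print(ex.master_read())
```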

  10. Milestoning with transition memory

    Science.gov (United States)

    Hawk, Alexander T.; Makarov, Dmitrii E.

    2011-12-01

    Milestoning is a method used to calculate the kinetics and thermodynamics of molecular processes occurring on time scales that are not accessible to brute force molecular dynamics (MD). In milestoning, the conformation space of the system is sectioned by hypersurfaces (milestones), an ensemble of trajectories is initialized on each milestone, and MD simulations are performed to calculate transitions between milestones. The transition probabilities and transition time distributions are then used to model the dynamics of the system with a Markov renewal process, wherein a long trajectory of the system is approximated as a succession of independent transitions between milestones. This approximation is justified if the transition probabilities and transition times are statistically independent. In practice, this amounts to a requirement that milestones are spaced such that trajectories lose position and velocity memory between subsequent transitions. Unfortunately, limiting the number of milestones limits both the resolution at which a system's properties can be analyzed, and the computational speedup achieved by the method. We propose a generalized milestoning procedure, milestoning with transition memory (MTM), which accounts for memory of previous transitions made by the system. When a reaction coordinate is used to define the milestones, the MTM procedure can be carried out at no significant additional expense as compared to conventional milestoning. To test MTM, we have applied its version that allows for the memory of the previous step to the toy model of a polymer chain undergoing Langevin dynamics in solution. We have computed the mean first passage time for the chain to attain a cyclic conformation and found that the number of milestones that can be used without incurring significant errors in the first passage time is at least 8 times that permitted by conventional milestoning. We further demonstrate that, unlike conventional milestoning, MTM permits
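    The Markov renewal picture underlying milestoning admits a compact numerical illustration (our example; K and t are invented): with transition probabilities K between milestones and mean transition times t, the mean first-passage time T to an absorbing milestone solves T_i = t_i + sum_j K_ij T_j.

```python
import numpy as np

K = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0]])   # milestone 2 is absorbing
t = np.array([1.0, 2.0, 0.0])     # mean transition time out of each milestone

n = len(t)
A = np.eye(n) - K
A[-1, :] = 0.0
A[-1, -1] = 1.0                   # enforce T = 0 at the absorbing milestone
b = t.copy()
b[-1] = 0.0
T = np.linalg.solve(A, b)
print(T[0])                       # mean first-passage time from milestone 0: 6.0
```

    MTM generalizes this picture by conditioning the transition statistics on previously visited milestones, so that subsequent transitions need not be statistically independent.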

  11. COMPUTERS IN SURGERY

    African Journals Online (AJOL)

    BODE

    Key words: Computers, surgery, applications. Introduction ... With improved memory, speed and processing power in an ever more compact ... with picture and voice embedment to wit. With the ... recall the tedium of anatomy, physiology and.

  12. Studies of Human Memory and Language Processing.

    Science.gov (United States)

    Collins, Allan M.

    The purposes of this study were to determine the nature of human semantic memory and to obtain knowledge usable in the future development of computer systems that can converse with people. The work was based on a computer model which is designed to comprehend English text, relating the text to information stored in a semantic data base that is…

  13. Memory bottlenecks and memory contention in multi-core Monte Carlo transport codes

    International Nuclear Information System (INIS)

    Tramm, J.R.; Siegel, A.R.

    2013-01-01

    The simulation of whole nuclear cores through the use of Monte Carlo codes requires an impracticably long time-to-solution. We have extracted a kernel that executes only the most computationally expensive steps of the Monte Carlo particle transport algorithm - the calculation of macroscopic cross sections - in an effort to expose bottlenecks within multi-core, shared memory architectures. (authors)

  14. Exploiting Data Similarity to Reduce Memory Footprints

    Science.gov (United States)

    2011-01-01

    ... as Figure 1 illustrates. We expect the budget for an exascale system to be approximately $200M and memory costs will account for about half of that budget [21] ... Figure 2 shows that monetary considerations will lead to significantly less main memory relative to compute capability in exascale systems even if ... (J. Davenport, T. Schlagel, F. Johnson, and P. Messina. A Decadal DOE Plan for Providing Exascale Applications and Technologies for DOE Mission)

  15. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
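    The tile low-rank idea can be illustrated in a few lines (our sketch; HiCMA itself uses optimized kernels, and the smooth kernel below is only a stand-in for the application's covariance tiles): an off-diagonal tile is replaced by truncated-SVD factors at a chosen accuracy.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 256)
tile = 1.0 / (1.0 + np.abs(x[:, None] - (x[None, :] + 2.0)))  # well-separated block

U, s, Vt = np.linalg.svd(tile, full_matrices=False)
tol = 1e-8
k = int(np.sum(s > tol * s[0]))   # numerical rank at the requested tolerance
U_k = U[:, :k] * s[:k]            # compressed factors: 2 * 256 * k numbers
V_k = Vt[:k, :]                   # versus 256 * 256 for the dense tile

err = np.linalg.norm(tile - U_k @ V_k) / np.linalg.norm(tile)
print(k, err)                     # small rank, relative error below tolerance
```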

  16. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  17. 'Unlearning' has a stabilizing effect in collective memories

    Science.gov (United States)

    Hopfield, J. J.; Feinstein, D. I.; Palmer, R. G.

    1983-07-01

    Crick and Mitchison [1] have presented a hypothesis for the functional role of dream sleep involving an 'unlearning' process. We have independently carried out mathematical and computer modelling of learning and 'unlearning' in a collective neural network of 30-1,000 neurones. The model network has a content-addressable memory or 'associative memory' which allows it to learn and store many memories. A particular memory can be evoked in its entirety when the network is stimulated by any adequate-sized subpart of the information of that memory [2]. But different memories of the same size are not equally easy to recall. Also, when memories are learned, spurious memories are also created and can also be evoked. Applying an 'unlearning' process, similar to the learning processes but with a reversed sign and starting from a noise input, enhances the performance of the network in accessing real memories and in minimizing spurious ones. Although our model was not motivated by higher nervous function, our system displays behaviours which are strikingly parallel to those needed for the hypothesized role of 'unlearning' in rapid eye movement (REM) sleep.
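    The procedure lends itself to a compact sketch (ours; sizes and the unlearning coefficient are illustrative): settle the network from noise into whatever attractor it finds, then apply the Hebbian rule with a small negative sign.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 64, 8
patterns = rng.choice([-1, 1], size=(P, N))
W = patterns.T @ patterns / N               # Hebbian learning of P memories
np.fill_diagonal(W, 0.0)

def settle(s, sweeps=10):
    """Asynchronous updates until the state (approximately) reaches an attractor."""
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

epsilon = 0.01
for _ in range(100):                        # one 'dream' per iteration
    s = settle(rng.choice([-1, 1], size=N)) # evoke a (possibly spurious) memory
    W -= epsilon * np.outer(s, s) / N       # unlearn it slightly
    np.fill_diagonal(W, 0.0)
```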

  18. Conditional bistability, a generic cellular mnemonic mechanism for robust and flexible working memory computations.

    Science.gov (United States)

    Rodriguez, Guillaume; Sarazin, Matthieu; Clemente, Alexandra; Holden, Stephanie; Paz, Jeanne T; Delord, Bruno

    2018-04-30

    Persistent neural activity, the substrate of working memory, is thought to emerge from synaptic reverberation within recurrent networks. However, reverberation models do not robustly explain fundamental dynamics of persistent activity, including high spiking irregularity, large intertrial variability, and state transitions. While cellular bistability may contribute to persistent activity, its rigidity appears incompatible with the labile characteristics of persistent activity. Here, we unravel in a cellular model a form of spike-mediated conditional bistability that is robust, generic, and provides a rich repertoire of mnemonic computations. Under the asynchronous synaptic inputs of the awakened state, conditional bistability generates spiking/bursting episodes, accounting for the irregularity, variability and state transitions characterizing persistent activity. This mechanism has likely been overlooked because of the sub-threshold input it requires, and we predict how to assess it experimentally. Our results suggest a reexamination of the role of intrinsic properties in the collective network dynamics responsible for flexible working memory. SIGNIFICANCE STATEMENT This study unravels a novel intrinsic neuronal property, conditional bistability. We show that, thanks to its conditional character, conditional bistability favors the emergence of flexible and robust forms of persistent activity in PFC neural networks, in contrast to previously studied classical forms of absolute bistability. Specifically, we demonstrate for the first time that conditional bistability (1) is a generic biophysical spike-dependent mechanism of layer V pyramidal neurons in the PFC and (2) accounts for essential neurodynamical features underlying the organisation and flexibility of PFC persistent activity (the large irregularity and intertrial variability of the discharge and its organization under discrete stable states), which remain unexplained in a robust fashion by current models

  19. Computing visibility on terrains in external memory

    NARCIS (Netherlands)

    Haverkort, H.J.; Toma, L.; Zhuang, Yi

    2007-01-01

    We describe a novel application of the distribution sweeping technique to computing visibility on terrains. Given an arbitrary viewpoint v, the basic problem we address is computing the visibility map or viewshed of v, which is the set of points in the terrain that are visible from v. We give the

  20. Analysis of simultaneous multi-bit induced by a cosmic ray for onboard memory

    International Nuclear Information System (INIS)

    Ono, Takashi; Mori, Masato

    1987-01-01

    Accompanying the development of intelligent onboard equipment using high-density memories, the soft-error phenomenon, i.e., the bit upset induced by a cosmic ray, must be investigated. In particular, the simultaneous multi-bit error (SME) induced by a cosmic ray, negligible on earth, becomes significant in space use. This paper estimates the SME occurrence rate of a memory chip by computer simulations and describes the results of SME experiments using a cyclotron. The computer simulation and experiment results confirm the occurrence of SMEs and show that the layout of memory cells is important for the probability of SME occurrence. (author)

  1. Pacing a data transfer operation between compute nodes on a parallel computer

    Science.gov (United States)

    Blocksome, Michael A [Rochester, MN

    2011-09-13

    Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
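    A loose sketch of the pacing pattern in the claim (ours; queues and a thread stand in for the compute nodes and their DMA engines, and the message names are invented):

```python
import queue
import threading

def origin(message, chunk_size, to_target, from_target):
    """Send the message chunk by chunk, pacing on a response per chunk."""
    for i in range(0, len(message), chunk_size):
        to_target.put(message[i:i + chunk_size])       # transfer one chunk
        to_target.put('PACING_REQUEST')                # remote-get pacing request
        assert from_target.get() == 'PACING_RESPONSE'  # wait before next chunk

def target(to_target, from_target, received):
    while True:
        item = to_target.get()
        if item == 'PACING_REQUEST':
            from_target.put('PACING_RESPONSE')         # DMA engine answers pacing
        else:
            received.append(item)

to_t, from_t, received = queue.Queue(), queue.Queue(), []
threading.Thread(target=target, args=(to_t, from_t, received), daemon=True).start()
origin(b'0123456789' * 4, 16, to_t, from_t)
print(b''.join(received))
```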

  2. Experimental Effects of Acute Exercise on Iconic Memory, Short-Term Episodic, and Long-Term Episodic Memory

    Directory of Open Access Journals (Sweden)

    Danielle Yanes

    2018-06-01

    The present experiment evaluated the effects of acute exercise on iconic memory and short- and long-term episodic memory. A two-arm, parallel-group randomized experiment was employed (n = 20 per group; Mage = 21 years). The experimental group engaged in an acute bout of moderate-intensity treadmill exercise for 15 min, while the control group engaged in a seated, time-matched computer task. Afterwards, the participants engaged in a paragraph-level episodic memory task (20 min delay and 24 h delay recall) as well as an iconic memory task, which involved 10 trials (at various speeds from 100 ms to 800 ms) of recalling letters from a 3 × 3 array matrix. For iconic memory, there was a significant main effect for time (F = 42.9, p < 0.001, η2p = 0.53) and a trend towards a group × time interaction (F = 2.90, p = 0.09, η2p = 0.07), but no main effect for group (F = 0.82, p = 0.37, η2p = 0.02). The experimental group had higher episodic memory scores at both the baseline (19.22 vs. 17.20) and follow-up (18.15 vs. 15.77), but these results were not statistically significant. These findings provide some suggestive evidence hinting towards an iconic memory and episodic benefit from acute exercise engagement.

  3. Contrasting single and multi-component working-memory systems in dual tasking.

    Science.gov (United States)

    Nijboer, Menno; Borst, Jelmer; van Rijn, Hedderik; Taatgen, Niels

    2016-05-01

    Working memory can be a major source of interference in dual tasking. However, there is no consensus on whether this interference is the result of a single working memory bottleneck, or of interactions between different working memory components that together form a complete working-memory system. We report a behavioral and an fMRI dataset in which working memory requirements are manipulated during multitasking. We show that a computational cognitive model that assumes a distributed version of working memory accounts for both behavioral and neuroimaging data better than a model that takes a more centralized approach. The model's working memory consists of an attentional focus, declarative memory, and a subvocalized rehearsal mechanism. Thus, the data and model favor an account where working memory interference in dual tasking is the result of interactions between different resources that together form a working-memory system. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Analysis of the Organization of Lexical Memory

    National Research Council Canada - National Science Library

    Miller, George

    1997-01-01

    The practical outcome of the project, Analysis of the Organization of Lexical Memory, is an electronic lexical database called WordNet that can be incorporated into computer systems for processing English text...

  5. Hearing aid noise suppression and working memory function

    OpenAIRE

    Fischer, Rosa-Linde; Neher, Tobias; Wagener, Kirsten C.

    2017-01-01

    OBJECTIVE: Research findings concerning the relation between benefit from hearing aid (HA) noise suppression and working memory function are inconsistent. The current study thus investigated the effects of three noise suppression algorithms on auditory working memory and the relation with reading span.DESIGN: Using a computer simulation of bilaterally fitted HAs, four settings were tested: (1) unprocessed, (2) directional microphones, (3) single-channel noise reduction and (4) binaural cohere...

  6. Read method compensating parasitic sneak currents in a crossbar memristive memory

    KAUST Repository

    Zidan, Mohammed A.; Omran, Hesham; Naous, Rawan; Salem, Ahmed Sultan; Salama, Khaled N.

    2017-01-01

    properties of the computer memory system to address this sneak-paths problem. The method of the invention is a method for reading a target memory cell located at an intersection of a target row of a gateless array and a target column of the gateless array

  7. Modeling aspects of human memory for scientific study.

    Energy Technology Data Exchange (ETDEWEB)

    Caudell, Thomas P. (University of New Mexico); Watson, Patrick (University of Illinois - Champaign-Urbana Beckman Institute); McDaniel, Mark A. (Washington University); Eichenbaum, Howard B. (Boston University); Cohen, Neal J. (University of Illinois - Champaign-Urbana Beckman Institute); Vineyard, Craig Michael; Taylor, Shawn Ellis; Bernard, Michael Lewis; Morrow, James Dan; Verzi, Stephen J.

    2009-10-01

    Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.

  8. Concurrent Operations of O2-Tree on Shared Memory Multicore Architectures

    Directory of Open Access Journals (Sweden)

    Daniel Ohene-Kwofie

    2014-05-01

    Full Text Available Modern computer architectures provide high-performance computing capability by having multiple CPU cores. Such systems are also typically associated with very large main-memory capacities, thereby allowing them to be used for fast processing of in-memory database applications. However, most of the concurrency control mechanisms associated with the index structures of these memory-resident databases do not scale well under high transaction rates. This paper presents the O2-Tree, a fast main-memory-resident index, which is also highly scalable and tolerant of high transaction rates in a concurrent environment using the relaxed balancing tree algorithm. The O2-Tree is a modified Red-Black tree in which the leaf nodes are formed into blocks that hold key-value pairs, while each internal node stores a single key that results from splitting leaf nodes. Multi-threaded concurrent manipulation of the O2-Tree outperforms the popular NoSQL-based key-value stores considered in this paper.
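
    As a rough illustration of the leaf-block idea in this record (a sketch, not the authors' implementation), leaf blocks hold sorted key-value pairs, and splitting a full block yields a single separator key that would be stored in an internal node of the balanced tree:

        import bisect

        BLOCK_CAPACITY = 4          # illustrative; real blocks are sized for cache lines

        class LeafBlock:
            def __init__(self):
                self.keys, self.values = [], []

            def insert(self, key, value):
                i = bisect.bisect_left(self.keys, key)
                self.keys.insert(i, key)
                self.values.insert(i, value)
                return len(self.keys) > BLOCK_CAPACITY    # caller must split if True

            def split(self):
                """Move the upper half into a new block; return the separator key
                (the smallest key of the new block) for the internal index node."""
                mid = len(self.keys) // 2
                right = LeafBlock()
                right.keys, right.values = self.keys[mid:], self.values[mid:]
                self.keys, self.values = self.keys[:mid], self.values[:mid]
                return right.keys[0], right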

  9. Evolving spiking networks with variable resistive memories.

    Science.gov (United States)

    Howard, Gerard; Bull, Larry; de Lacy Costello, Ben; Gale, Ella; Adamatzky, Andrew

    2014-01-01

    Neuromorphic computing is a brainlike information processing paradigm that requires adaptive learning mechanisms. A spiking neuro-evolutionary system is used for this purpose; plastic resistive memories are implemented as synapses in spiking neural networks. The evolutionary design process exploits parameter self-adaptation and allows the topology and synaptic weights to be evolved for each network in an autonomous manner. Variable resistive memories are the focus of this research; each synapse has its own conductance profile which modifies the plastic behaviour of the device and may be altered during evolution. These variable resistive networks are evaluated on a noisy robotic dynamic-reward scenario against two static resistive memories and a system containing standard connections only. The results indicate that the extra behavioural degrees of freedom available to the networks incorporating variable resistive memories enable them to outperform the comparative synapse types.

  10. Review on Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    P. Sumithra

    2017-03-01

    Full Text Available Computational electromagnetics (CEM) is applied to model the interaction of electromagnetic fields with objects such as antennas, waveguides, and aircraft, and with their environment, using Maxwell's equations. In this paper the strengths and weaknesses of various computational electromagnetic techniques are discussed. The performance of these techniques, in terms of accuracy, memory, and computational time, for application-specific tasks such as modeling RCS (radar cross section), space applications, thin wires, and antenna arrays is presented.

  11. Quantum capacity of dephasing channels with memory

    International Nuclear Information System (INIS)

    D'Arrigo, A; Benenti, G; Falci, G

    2007-01-01

    We show that the amount of coherent quantum information that can be reliably transmitted down a dephasing channel with memory is maximized by separable input states. In particular, we model the channel as a Markov chain or a multimode environment of oscillators. While in the first model, the maximization is achieved for the maximally mixed input state, in the latter it is convenient to exploit the presence of a decoherence-protected subspace generated by memory effects. We explicitly compute the quantum channel capacity for the first model while numerical simulations suggest a lower bound for the latter. In both cases memory effects enhance the coherent information. We present results valid for arbitrary input size

  12. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Li, Chien-Ming

    2018-01-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concerns to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidities and mortality. The authors intend to propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500–700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level in both end-sensing units and remote monitor devices via a wireless network and fog/cloud computing. The animal experimental results (pig blood) will demonstrate the feasibility. PMID:29515815

  13. Noise-assisted morphing of memory and logic function

    International Nuclear Information System (INIS)

    Kohar, Vivek; Sinha, Sudeshna

    2012-01-01

    We demonstrate how noise allows a bistable system to behave as a memory device, as well as a logic gate. Namely, in some optimal range of noise, the system can operate flexibly, both as a NAND/AND gate and a Set-Reset latch, by varying an asymmetrizing bias. Thus we show how this system implements memory, even for sub-threshold input signals, using noise constructively to store information. This can lead to the development of reconfigurable devices that can switch efficiently between memory tasks and logic operations. Highlights: We consider a nonlinear system in a noisy environment. We show that the system can function as a robust memory element. Further, the response of the system can be easily morphed from memory to logic operations. Such systems can potentially act as building blocks of "smart" computing devices.
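
    A minimal numerical sketch of this idea, assuming the overdamped bistable model dx/dt = x - x^3 + I1 + I2 + bias + noise that is standard in the logical-stochastic-resonance literature (parameter values below are illustrative, not the authors'; which truth table is realized depends on the sign of the bias):

        import numpy as np

        def settle(i1, i2, bias, noise_sd, steps=20000, dt=1e-3, seed=0):
            """Euler-Maruyama integration; the well the state ends up in
            (the sign of x) is read out as the logic/memory value."""
            rng = np.random.default_rng(seed)
            x = 0.0
            sqdt = np.sqrt(dt)
            for _ in range(steps):
                x += (x - x**3 + i1 + i2 + bias) * dt \
                     + noise_sd * sqdt * rng.standard_normal()
            return int(x > 0)

        level = {0: -0.15, 1: 0.15}      # sub-threshold encoding of the logic inputs
        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", settle(level[a], level[b], bias=-0.2, noise_sd=0.4))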

  14. Load and distinctness interact in working memory for lexical manual gestures

    Directory of Open Access Journals (Sweden)

    Mary eRudner

    2015-08-01

    Full Text Available The Ease of Language Understanding model (ELU; Rönnberg et al., 2013) predicts that decreasing the distinctness of language stimuli increases working memory load; in the speech domain this notion is supported by empirical evidence. Our aim was to determine whether such an over-additive interaction can be generalized to sign processing in sign-naïve individuals and whether it is modulated by experience of computer gaming. Twenty young adults with no knowledge of sign language performed an n-back working memory task based on manual gestures lexicalized in sign language; the visual resolution of the signs and working memory load were manipulated. Performance was poorer when load was high and resolution was low. These two effects interacted over-additively, demonstrating that reducing the resolution of signed stimuli increases working memory load when there is no pre-existing semantic representation. This suggests that load and distinctness are handled by a shared amodal mechanism which can be revealed empirically when stimuli are degraded and load is high, even without pre-existing semantic representation. There was some evidence that the mechanism is influenced by computer gaming experience. Future work should explore how the shared mechanism is influenced by pre-existing semantic representation and sensory factors together with computer gaming experience.

  15. Load and distinctness interact in working memory for lexical manual gestures.

    Science.gov (United States)

    Rudner, Mary; Toscano, Elena; Holmer, Emil

    2015-01-01

    The Ease of Language Understanding model (Rönnberg et al., 2013) predicts that decreasing the distinctness of language stimuli increases working memory load; in the speech domain this notion is supported by empirical evidence. Our aim was to determine whether such an over-additive interaction can be generalized to sign processing in sign-naïve individuals and whether it is modulated by experience of computer gaming. Twenty young adults with no knowledge of sign language performed an n-back working memory task based on manual gestures lexicalized in sign language; the visual resolution of the signs and working memory load were manipulated. Performance was poorer when load was high and resolution was low. These two effects interacted over-additively, demonstrating that reducing the resolution of signed stimuli increases working memory load when there is no pre-existing semantic representation. This suggests that load and distinctness are handled by a shared amodal mechanism which can be revealed empirically when stimuli are degraded and load is high, even without pre-existing semantic representation. There was some evidence that the mechanism is influenced by computer gaming experience. Future work should explore how the shared mechanism is influenced by pre-existing semantic representation and sensory factors together with computer gaming experience.
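
    For readers unfamiliar with the paradigm used in this record (and its duplicate above), a generic n-back sequence generator and scorer might look like the following sketch; the gesture stimuli, load levels, and resolution manipulation of the study are not reproduced:

        import random

        def nback_sequence(stimuli, n, length, match_rate=0.3, seed=1):
            """Generate a sequence in which roughly match_rate of the items
            repeat the item presented n positions earlier."""
            rng = random.Random(seed)
            seq = [rng.choice(stimuli) for _ in range(n)]
            while len(seq) < length:
                if rng.random() < match_rate:
                    seq.append(seq[-n])       # planned n-back match
                else:                         # draw a deliberate non-match
                    seq.append(rng.choice([s for s in stimuli if s != seq[-n]]))
            return seq

        def hit_rate(seq, responses, n):
            """responses[i] is True where the participant reported a match."""
            targets = [i for i in range(n, len(seq)) if seq[i] == seq[i - n]]
            return sum(responses[i] for i in targets) / len(targets)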

  16. Data input from an analog-to-digital converter into the M-6000 computer

    International Nuclear Information System (INIS)

    Kalashnikov, A.M.; Sheremet'ev, A.K.

    1978-01-01

    A device for spectrometric data input from the ADC-4096 into the memory of the M-6000 computer, operating in the information storage regime, is described. The input device, built on integrated circuits, matches the signal levels of the fast-response analog-to-digital converter and the computer with the help of resistors and inverters. In addition, the input device forms a strobe to trigger an increment channel used to record information into the computer memory. The use of the input device eliminates intermediate information storage in the analyzer memory and ensures fast response of the devices.

  17. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  18. Prospective memory, working memory, retrospective memory and self-rated memory performance in persons with intellectual disability

    OpenAIRE

    Levén, Anna; Lyxell, Björn; Andersson, Jan; Danielsson, Henrik; Rönnberg, Jerker

    2008-01-01

    The purpose of the present study was to examine the relationship between prospective memory, working memory, retrospective memory and self-rated memory capacity in adults with and without intellectual disability. Prospective memory was investigated by means of a picture-based task. Working memory was measured as performance on span tasks. Retrospective memory was scored as recall of subject performed tasks. Self-ratings of memory performance were based on the prospective and retrospective mem...

  19. The influence of Markov decision process structure on the possible strategic use of working memory and episodic memory.

    Science.gov (United States)

    Zilli, Eric A; Hasselmo, Michael E

    2008-07-23

    Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially-observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.

  20. The influence of Markov decision process structure on the possible strategic use of working memory and episodic memory.

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2008-07-01

    Full Text Available Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially-observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.
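
    To give the flavor of such an analysis (a toy check, not the authors' formal derivation), consider a two-arm spatial-alternation task: reward is on the arm not visited on the previous trial, so the correct action is fully determined by one past observation, and an agent that cannot maintain or retrieve that observation cannot exceed chance:

        # Hypothetical two-arm alternation task, used purely for illustration.
        def optimal_action(prev_arm):              # alternate: choose the other arm
            return "L" if prev_arm == "R" else "R"

        def accuracy(policy):
            """Average accuracy of a policy mapping the remembered arm to an action."""
            return sum(policy(p) == optimal_action(p) for p in "LR") / 2

        memoryless = lambda prev_arm: "L"                    # no access to the past
        one_item_wm = lambda prev_arm: optimal_action(prev_arm)
        print(accuracy(memoryless), accuracy(one_item_wm))   # 0.5 vs. 1.0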

  1. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory.

    Science.gov (United States)

    Agres, Kat; Abdallah, Samer; Pearce, Marcus

    2018-01-01

    A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different types of statistical information affect listeners' memory for auditory stimuli. We used a combination of behavioral and computational methods to investigate memory for non-linguistic auditory sequences. Participants repeatedly heard tone sequences varying systematically in their information-theoretic properties. Expectedness ratings of tones were collected during three listening sessions, and a recognition memory test was given after each session. Information-theoretic measures of sequential predictability significantly influenced listeners' expectedness ratings, and variations in these properties had a significant impact on memory performance. Predictable sequences yielded increasingly better memory performance with increasing exposure. Computational simulations using a probabilistic model of auditory expectation suggest that listeners dynamically formed a new, and increasingly accurate, implicit cognitive model of the information-theoretic structure of the sequences throughout the experimental session. Copyright © 2017 Cognitive Science Society, Inc.
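
    A common way to quantify the sequential predictability this record manipulates is the surprisal, -log2 P(x_t | context), of each event under an incrementally updated statistical model. The sketch below uses a first-order Markov model with add-one smoothing; the study's model is a variable-order probabilistic one, so this is only illustrative:

        import math
        from collections import defaultdict

        def surprisals(seq, alphabet):
            """-log2 P(x_t | x_{t-1}) under a first-order model learned online,
            with add-one (Laplace) smoothing over the alphabet."""
            counts = defaultdict(lambda: defaultdict(int))
            out = []
            for prev, cur in zip(seq, seq[1:]):
                total = sum(counts[prev].values())
                p = (counts[prev][cur] + 1) / (total + len(alphabet))
                out.append(-math.log2(p))
                counts[prev][cur] += 1     # update the model only after predicting
            return out

        tones = "CEGCEGCEG" * 4            # a highly predictable toy tone sequence
        s = surprisals(tones, "CEG")
        print(sum(s[:8]) / 8, sum(s[-8:]) / 8)   # mean surprisal falls with exposure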

  2. Impurity and quaternions in nonrelativistic scattering from a quantum memory

    International Nuclear Information System (INIS)

    Margetis, Dionisios; Grillakis, Manoussos G

    2008-01-01

    Models of quantum computing rely on transformations of the states of a quantum memory. We study mathematical aspects of a model proposed by Wu in which the memory state is changed via the scattering of incoming particles. This operation causes the memory content to deviate from a pure state, i.e. induces impurity. For nonrelativistic particles scattered from a two-state memory and sufficiently general interaction potentials in (1+1) dimensions, we express impurity in terms of quaternionic commutators. In this context, pure memory states correspond to null hyperbolic quaternions. In the case with point interactions, the scattering process amounts to appropriate rotations of quaternions in the frequency domain. Our work complements previous analyses by Margetis and Myers (2006 J. Phys. A 39 11567)

  3. A unitary signal-detection model of implicit and explicit memory.

    Science.gov (United States)

    Berry, Christopher J; Shanks, David R; Henson, Richard N A

    2008-10-01

    Do dissociations imply independent systems? In the memory field, the view that there are independent implicit and explicit memory systems has been predominantly supported by dissociation evidence. Here, we argue that many of these dissociations do not necessarily imply distinct memory systems. We review recent work with a single-system computational model that extends signal-detection theory (SDT) to implicit memory. SDT has had a major influence on research in a variety of domains. The current work shows that it can be broadened even further in its range of application. Indeed, the single-system model that we present does surprisingly well in accounting for some key dissociations that have been taken as evidence for independent implicit and explicit memory systems.
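
    A minimal simulation of the single-system idea, assuming one latent memory-strength variable that drives both an explicit old/new judgment and an implicit measure such as identification speed (parameter values are illustrative, not those fitted by the authors):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10000
        studied = rng.integers(0, 2, n)                 # 1 = old item, 0 = new item
        f = 1.0 * studied + rng.standard_normal(n)      # shared memory-strength signal

        # Explicit test: strength plus task-specific noise compared to a criterion.
        recognized = f + 0.8 * rng.standard_normal(n) > 0.5
        # Implicit test: stronger items are identified faster, with its own noise.
        rt_ms = 600 - 30 * f + 20 * rng.standard_normal(n)

        # One signal, two noisy read-outs: both show memory effects, and the
        # independent noise terms let the measures dissociate without two systems.
        print(recognized[studied == 1].mean() - recognized[studied == 0].mean())
        print(rt_ms[studied == 0].mean() - rt_ms[studied == 1].mean())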

  4. Energy-Efficient Abundant-Data Computing: The N3XT 1,000X

    OpenAIRE

    Aly Mohamed M. Sabry; Gao Mingyu; Hills Gage; Lee Chi-Shuen; Pinter Greg; Shulaker Max M.; Wu Tony F.; Asheghi Mehdi; Bokor Jeff; Franchetti Franz; Goodson Kenneth E.; Kozyrakis Christos; Markov Igor; Olukotun Kunle; Pileggi Larry

    2015-01-01

    Next generation information technologies will process unprecedented amounts of loosely structured data that overwhelm existing computing systems. N3XT improves the energy efficiency of abundant data applications 1000 fold by using new logic and memory technologies 3D integration with fine grained connectivity and new architectures for computation immersed in memory.

  5. Transactional Memory

    CERN Document Server

    Harris, Tim; Rajwar, Ravi

    2010-01-01

    The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically: either it completes successfully and...
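
    As a toy illustration of the transactional idea (a sketch only; real software transactional memory uses far more careful conflict detection and commit protocols), a transaction can buffer its writes privately and validate its reads against a global version counter at commit time:

        import threading

        _lock = threading.Lock()
        _version = 0
        _store = {}

        class Transaction:
            def __init__(self):
                self.reads, self.writes = {}, {}
                self.start_version = _version

            def read(self, key):
                if key in self.writes:                # read-your-own-writes
                    return self.writes[key]
                self.reads[key] = _store.get(key)
                return self.reads[key]

            def write(self, key, value):
                self.writes[key] = value              # buffered, invisible to others

            def commit(self):
                global _version
                with _lock:                           # validate and publish atomically
                    if _version != self.start_version and any(
                            _store.get(k) != v for k, v in self.reads.items()):
                        return False                  # conflict: caller should retry
                    _store.update(self.writes)
                    _version += 1
                    return True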

  6. Languages, compilers and run-time environments for distributed memory machines

    CERN Document Server

    Saltz, J

    1992-01-01

    Papers presented within this volume cover a wide range of topics related to programming distributed memory machines. Distributed memory architectures, although having the potential to supply the very high levels of performance required to support future computing needs, present awkward programming problems. The major issue is to design methods which enable compilers to generate efficient distributed memory programs from relatively machine-independent program specifications. This book is the compilation of papers describing a wide range of research efforts aimed at easing the task of programming...

  7. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    Science.gov (United States)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
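
    The ABC step referred to above is, at its core, rejection sampling: draw candidate parameters from a prior, run the forward simulation, and keep candidates whose summary statistic falls within a tolerance of the reference value. A generic sketch, not tied to the specific closure terms of the paper:

        import numpy as np

        def abc_rejection(simulate, prior_sample, observed, eps, n_draws=20000, seed=0):
            """Rejection ABC: accepted draws approximate the posterior
            p(theta | observed) without evaluating any likelihood."""
            rng = np.random.default_rng(seed)
            accepted = [theta for theta in (prior_sample(rng) for _ in range(n_draws))
                        if abs(simulate(theta, rng) - observed) < eps]
            return np.array(accepted)

        # Toy example: infer a Gaussian mean from an observed sample mean of 1.3.
        post = abc_rejection(
            simulate=lambda th, rng: rng.normal(th, 1.0, 100).mean(),
            prior_sample=lambda rng: rng.uniform(-5.0, 5.0),
            observed=1.3, eps=0.05)
        print(post.mean(), post.size)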

  8. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Bluegene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
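
    The "simple linear model" of per-core memory consumption mentioned in this record can be pictured as follows. The coefficients here are invented for illustration (the published model fits them per data structure), but the qualitative point survives: any term that scales with the total network size, rather than with the share hosted on one core, eventually saturates memory no matter how many cores are added.

        def memory_per_core(n_neurons, n_synapses, n_cores,
                            m_base=2e8, m_global=16, m_neuron=1.5e3, m_synapse=48):
            """Illustrative linear model of bytes used on one core."""
            return (m_base                                # fixed simulator overhead
                    + m_global * n_neurons                # bookkeeping replicated per core
                    + m_neuron * n_neurons / n_cores      # neurons hosted on this core
                    + m_synapse * n_synapses / n_cores)   # synapses hosted on this core

        for cores in (1024, 8192, 65536):
            gib = memory_per_core(1e8, 1e12, cores) / 2**30
            print(cores, round(gib, 2), "GiB per core")   # the replicated term dominates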

  9. The neural bases of orthographic working memory

    Directory of Open Access Journals (Sweden)

    Jeremy Purcell

    2014-04-01

    First, these results reveal a neurotopography of OWM lesion sites that is well-aligned with results from neuroimaging of orthographic working memory in neurally intact participants (Rapp & Dufor, 2011). Second, the dorsal neurotopography of the OWM lesion overlap is clearly distinct from what has been reported for lesions associated with either lexical or sublexical deficits (e.g., Henry, Beeson, Stark, & Rapcsak, 2007; Rapcsak & Beeson, 2004); these have, respectively, been identified with the inferior occipital/temporal and superior temporal/inferior parietal regions. These neurotopographic distinctions support the claims of the computational distinctiveness of long-term vs. working memory operations. The specific lesion loci raise a number of questions to be discussed regarding: (a) the selectivity of these regions and associated deficits to orthographic working memory vs. working memory more generally, and (b) the possibility that different lesion sub-regions may correspond to different components of the OWM system.

  10. Context-dependent memory decay is evidence of effort minimization in motor learning: a computational study

    OpenAIRE

    Takiyama, Ken

    2015-01-01

    Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this t...
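
    The two processes named here are commonly written as a trial-by-trial state-space model: the motor memory x_t decays by a retention factor a and is corrected by a fraction b of the movement error. A sketch with illustrative parameters (the paper's context-dependent decay is not reproduced):

        def simulate_adaptation(perturbation, trials=200, a=0.99, b=0.1):
            """x_{t+1} = a * x_t + b * e_t,  with error e_t = perturbation - x_t.
            a < 1 implements memory decay; b is the error-minimization rate."""
            x, trace = 0.0, []
            for _ in range(trials):
                e = perturbation - x          # error experienced on this trial
                x = a * x + b * e             # decay plus error-driven correction
                trace.append(x)
            return trace

        curve = simulate_adaptation(30.0)     # e.g., adapting to a 30-degree rotation
        print(round(curve[-1], 1))            # asymptote b*p/(1 - a + b) stays below 30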

  11. Chemical memory reactions induced bursting dynamics in gene expression.

    Science.gov (United States)

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memorial phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has been observed in experiments recently. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool for studying memory processes and conditional chemical reactions in a wide range of complex biological systems.

  12. Three-terminal resistive switching memory in a transparent vertical-configuration device

    International Nuclear Information System (INIS)

    Ungureanu, Mariana; Llopis, Roger; Casanova, Fèlix; Hueso, Luis E.

    2014-01-01

    The resistive switching phenomenon has attracted much attention recently for memory applications. It describes the reversible change in the resistance of a dielectric between two non-volatile states by the application of electrical pulses. Typical resistive switching memories are two-terminal devices formed by an oxide layer placed between two metal electrodes. Here, we report on the fabrication and operation of a three-terminal resistive switching memory that works as a reconfigurable logic component and offers an increased logic density on chip. The three-terminal memory device we present is transparent and could be further incorporated in transparent computing electronic technologies

  13. Self-correcting quantum memory in a thermal environment

    International Nuclear Information System (INIS)

    Chesi, Stefano; Roethlisberger, Beat; Loss, Daniel

    2010-01-01

    The ability to store information is of fundamental importance to any computer, be it classical or quantum. To identify systems for quantum memories which rely, analogously to classical memories, on passive error protection ("self-correction") is of greatest interest in quantum information science. While systems with topological ground states have been considered to be promising candidates, a large class of them was recently proven unstable against thermal fluctuations. Here, we propose two-dimensional (2D) spin models unaffected by this result. Specifically, we introduce repulsive long-range interactions in the toric code and establish a memory lifetime polynomially increasing with the system size. This remarkable stability is shown to originate directly from the repulsive long-range nature of the interactions. We study the time dynamics of the quantum memory in terms of diffusing anyons and support our analytical results with extensive numerical simulations. Our findings demonstrate that self-correcting quantum memories can exist in 2D at finite temperatures.

  14. Human Uniqueness, Cognition by Description, and Procedural Memory

    Directory of Open Access Journals (Sweden)

    John Bolender

    2008-06-01

    Full Text Available Evidence will be reviewed suggesting a fairly direct link between the human ability to think about entities which one has never perceived — here called “cognition by description” — and procedural memory. Cognition by description is a uniquely hominid trait which makes religion, science, and history possible. It is hypothesized that cognition by description (in the manner of Bertrand Russell’s “knowledge by description”) requires variable binding, which in turn utilizes quantifier raising. Quantifier raising plausibly depends upon the computational core of language, specifically the element of it which Noam Chomsky calls “internal Merge”. Internal Merge produces hierarchical structures by means of a memory of derivational steps, a process plausibly involving procedural memory. The hypothesis is testable, predicting that procedural memory deficits will be accompanied by impairments in cognition by description. We also discuss neural mechanisms plausibly underlying procedural memory and also, by our hypothesis, cognition by description.

  15. Single-board 32-bit computer for the FASTBUS

    International Nuclear Information System (INIS)

    Kellner, R.; Blossom, J.M.; Hong, J.P.

    1985-01-01

    The Los Alamos National Laboratory is building a 32-bit computer on a FASTBUS board. It will use the National Semiconductor 32032 chip set, including the demand-paged memory management, floating-point slave processor, and interrupt control chips. The board will support 4 megabytes of memory, which can be accessed by the processor over an on-board execution bus at processor speeds and by the FASTBUS at 80 megabytes per second. A windowed, direct memory access mechanism allows transfers of up to all of the memory.

  16. Use of non-volatile memories for SSC detector readout

    International Nuclear Information System (INIS)

    Fennelly, A.J.; Woosley, J.K.; Johnson, M.B.

    1990-01-01

    Use of non-volatile memory units at the end of each fiber optic bunch/strand would substantially increase the information available from experiments by providing a complete event history, in addition to easing real-time processing requirements. This may be an alternative to advancing to optical computing techniques. Available and low-risk projected technologies will be surveyed, with costing addressed. Some discussion will be given to conversion of optical signals to electronic information, to concepts for providing timing pulses to the memory units, and to the magnetoresistive (MRAM) and ferroelectric (FERAM) random access memory technologies that may be utilized in the prototype system.

  17. Implicit and explicit memory for spatial information in Alzheimer's disease.

    Science.gov (United States)

    Kessels, R P C; Feijen, J; Postma, A

    2005-01-01

    There is abundant evidence that memory impairment in dementia in patients with Alzheimer's disease (AD) is related to explicit, conscious forms of memory, whereas implicit, unconscious forms of memory function remain relatively intact or are less severely affected. Only a few studies have been performed on spatial memory function in AD, showing that AD patients' explicit spatial memory is impaired, possibly related to hippocampal dysfunction. However, studies on implicit spatial memory in AD are lacking. The current study set out to investigate implicit and explicit spatial memory in AD patients (n=18) using an ecologically valid computer task, in which participants had to remember the locations of various objects in common rooms. The contribution of implicit and explicit memory functions was estimated by means of the process dissociation procedure. The results show that explicit spatial memory is impaired in AD patients compared with a control group (n=21). However, no group difference was found on implicit spatial function. This indicates that spared implicit memory in AD extends to the spatial domain, while the explicit spatial memory function deteriorates. Clinically, this finding might be relevant, in that an intact implicit memory function might be helpful in overcoming problems in explicit processing. Copyright (c) 2005 S. Karger AG, Basel.
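
    The process dissociation procedure used here separates explicit (recollection, R) and implicit (automatic, A) contributions from performance in inclusion and exclusion conditions, via the standard equations P(inclusion) = R + (1 - R)A and P(exclusion) = (1 - R)A. A worked example with hypothetical proportions:

        def process_dissociation(p_inclusion, p_exclusion):
            """Solve the two standard equations for R and A."""
            r = p_inclusion - p_exclusion            # R = P(incl) - P(excl)
            a = p_exclusion / (1 - r) if r < 1 else float("nan")
            return r, a

        r, a = process_dissociation(0.70, 0.35)      # illustrative numbers only
        print(round(r, 2), round(a, 2))              # explicit 0.35, implicit ~0.54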

  18. Consolidation of long-term memory: Evidence and alternatives.

    OpenAIRE

    Meeter, M.; Murre, J.M.J.

    2004-01-01

    Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how consolidation can explain characteristics of amnesia, but they have not elucidated how consolidation must be envisaged. Here findings are reviewed that shed light on how consolidation may be impleme...

  19. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  20. Parallel computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, D. C.; Murthy, D. V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic analysis capability on a distributed-memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a three-dimensional unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent are demonstrated using 32 processors. The effects of subtask ordering, problem size and network topology are presented. A comparison to results on a shared-memory computer indicates that higher speedup is achieved on the distributed-memory system.

  1. Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator

    Science.gov (United States)

    Asaad, Sameh W.; Kapur, Mohit

    2016-03-15

    A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first number. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the given set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle-accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.
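
    The frequency relation in this record amounts to time-multiplexing the scarcer ports: if the device memory must service P port accesses each device cycle but the target memory exposes only Q < P ports, the target has to complete ceil(P / Q) accesses per device cycle. A back-of-the-envelope check with invented numbers:

        import math

        def min_target_frequency(f_device_hz, device_ports, target_ports):
            """Slowest target clock that still serves every device-cycle access."""
            return f_device_hz * math.ceil(device_ports / target_ports)

        # E.g., a 4-port DUT memory at 100 MHz emulated with a 1-port block RAM:
        print(min_target_frequency(100e6, device_ports=4, target_ports=1) / 1e6, "MHz")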

  2. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like the motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.
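
    The hard drive capacity relationship stated above is a straightforward product of the geometry terms; for example, with an invented geometry:

        def disk_capacity_bytes(sides, tracks_per_side, sectors_per_track,
                                bytes_per_sector):
            """Capacity = sides x tracks/side x sectors/track x bytes/sector."""
            return sides * tracks_per_side * sectors_per_track * bytes_per_sector

        # Illustrative numbers only: 4 sides, 16383 tracks, 63 sectors, 512 bytes.
        print(disk_capacity_bytes(4, 16383, 63, 512) / 1e9, "GB")   # about 2.11 GB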

  3. Simulation of a small computer of the TRA-1001 type on the BESM computer

    International Nuclear Information System (INIS)

    Galaktionov, V.V.

    1975-01-01

    The purpose of, and possible approaches to, simulating one computer on another are considered. An emulator (simulation program) for a small computer of the TRA-1001 type on the BESM-6 computer is presented. The basic elements of the simulated computer are the following: memory (8 K words), central processor, program-controlled input-output channel, interrupt circuit, and computer panel. Operation of the input-output devices (the ASP-33 and FS-1500 teletypes) is also simulated. In actual operation, the emulator has been used for translating programs prepared on punched cards with the aid of the SLANG-1 translator on the BESM-6 computer. The adaptation of the translator from the COPLAN language was also carried out with the aid of the emulator.

  4. Working memory binding and episodic memory formation in aging, mild cognitive impairment, and Alzheimer's dementia.

    Science.gov (United States)

    van Geldorp, Bonnie; Heringa, Sophie M; van den Berg, Esther; Olde Rikkert, Marcel G M; Biessels, Geert Jan; Kessels, Roy P C

    2015-01-01

    Recent studies indicate that in both normal and pathological aging working memory (WM) performance deteriorates, especially when associations have to be maintained. However, most studies typically do not assess the relationship between WM and episodic memory formation. In the present study, we examined WM and episodic memory formation in normal aging and in patients with early Alzheimer's disease (mild cognitive impairment, MCI; and Alzheimer's dementia, AD). In the first study, 26 young adults (mean age 29.6 years) were compared to 18 middle-aged adults (mean age 52.2 years) and 25 older adults (mean age 72.8 years). We used an associative delayed-match-to-sample WM task, which requires participants to maintain two pairs of faces and houses presented on a computer screen for short (3 s) or long (6 s) maintenance intervals. After the WM task, an unexpected subsequent associative memory task was administered (two-alternative forced choice). In the second study, 27 patients with AD and 19 patients with MCI were compared to 25 older controls, using the same paradigm as that in Experiment 1. Older adults performed worse than both middle-aged and young adults. No effect of delay was observed in the healthy adults, and pairs that were processed during long maintenance intervals were not better remembered in the subsequent memory task. In the MCI and AD patients, longer maintenance intervals hampered the task performance. Also, both patient groups performed significantly worse than controls on the episodic memory task as well as the associative WM task. Aging and AD present with a decline in WM binding, a finding that extends similar results in episodic memory. Longer delays in the WM task did not affect episodic memory formation. We conclude that WM deficits are found when WM capacity is exceeded, which may occur during associative processing.

  5. Neuromorphic cognitive systems a learning and memory centered approach

    CERN Document Server

    Yu, Qiang; Hu, Jun; Tan Chen, Kay

    2017-01-01

    This book presents neuromorphic cognitive systems from a learning- and memory-centered perspective. It illustrates how to build a system network of neurons to perform spike-based information processing, computing, and high-level cognitive tasks. It is beneficial to a wide spectrum of readers, including undergraduate and postgraduate students and researchers who are interested in neuromorphic computing and neuromorphic engineering, as well as engineers and professionals in industry who are involved in the design and applications of neuromorphic cognitive systems, neuromorphic sensors and processors, and cognitive robotics. The book formulates a systematic framework, from the basic mathematical and computational methods in spike-based neural encoding, through learning in both single and multi-layered networks, to a near-cognitive level composed of memory and cognition. Since the mechanisms by which spiking neurons integrate to form cognitive functions, as in the brain, are little understood, studies of neuromo...

  6. Low-memory iterative density fitting.

    Science.gov (United States)

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with linear-scaling memory requirements only. Compared with the standard density fitting implementation, up to a 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner, at a cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
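
    A generic preconditioned conjugate gradient loop of the kind this record builds on is sketched below, with a simple diagonal (Jacobi) preconditioner standing in for the paper's Coulomb-metric blocks and a dense matrix-vector product standing in for CFMM. The point of the structure is that the matrix enters only through matvecs, which is what allows a linear-scaling multipole evaluation to replace it.

        import numpy as np

        def pcg(matvec, b, precond, tol=1e-8, max_iter=500):
            """Preconditioned conjugate gradients for s.p.d. systems A x = b."""
            x = np.zeros_like(b)
            r = b - matvec(x)
            z = precond(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = matvec(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = precond(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        A = np.diag(np.arange(1.0, 101.0)) + 0.01        # toy s.p.d. matrix
        b = np.ones(100)
        x = pcg(lambda v: A @ v, b, precond=lambda r: r / np.diag(A))
        print(np.linalg.norm(A @ x - b))                 # ~0: converged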

  7. Introduction to magnetic random-access memory

    CERN Document Server

    Dieny, Bernard; Lee, Kyung-Jin

    2017-01-01

    Magnetic random-access memory (MRAM) is poised to replace traditional computer memory based on complementary metal-oxide semiconductors (CMOS). MRAM will surpass all other types of memory devices in terms of nonvolatility, low energy dissipation, fast switching speed, radiation hardness, and durability. Although toggle-MRAM is currently a commercial product, it is clear that future developments in MRAM will be based on spin-transfer torque, which makes use of electrons’ spin angular momentum instead of their charge. MRAM will require an amalgamation of magnetics and microelectronics technologies. However, researchers and developers in magnetics and in microelectronics attend different technical conferences, publish in different journals, use different tools, and have different backgrounds in condensed-matter physics, electrical engineering, and materials science. This book is an introduction to MRAM for microelectronics engineers written by specialists in magnetic materials and devices. It presents the bas...

  8. Synaptic clustering within dendrites: an emerging theory of memory formation

    Science.gov (United States)

    Kastellakis, George; Cai, Denise J.; Mednick, Sara C.; Silva, Alcino J.; Poirazi, Panayiota

    2015-01-01

    It is generally accepted that complex memories are stored in distributed representations throughout the brain, however the mechanisms underlying these representations are not understood. Here, we review recent findings regarding the subcellular mechanisms implicated in memory formation, which provide evidence for a dendrite-centered theory of memory. Plasticity-related phenomena which affect synaptic properties, such as synaptic tagging and capture, synaptic clustering, branch strength potentiation and spinogenesis provide the foundation for a model of memory storage that relies heavily on processes operating at the dendrite level. The emerging picture suggests that clusters of functionally related synapses may serve as key computational and memory storage units in the brain. We discuss both experimental evidence and theoretical models that support this hypothesis and explore its advantages for neuronal function. PMID:25576663

  9. Laser memory (hologram) and coincident redundant multiplex memory (CRM-memory)

    International Nuclear Information System (INIS)

    Ostojic, Branko

    1975-01-01

    It is shown that, besides the memory which remembers an object by memorizing the phases of interfering light waves (i.e. the hologram), it is possible to construct a memory which remembers an object by memorizing the phases of interfering impulses (CRM-memory). A mathematical description of the memory, based on an experimental model, is given. Although only the technical aspect of the CRM-memory is covered in the paper, the possibility is mentioned that human memory works on the same principle and that the invention of the CRM-memory is due to a cybernetic analysis of the human eye-visual cortex system.

  10. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  11. Systemic Lisbon Battery: Normative Data for Memory and Attention Assessments.

    Science.gov (United States)

    Gamito, Pedro; Morais, Diogo; Oliveira, Jorge; Ferreira Lopes, Paulo; Picareli, Luís Felipe; Matias, Marcelo; Correia, Sara; Brito, Rodrigo

    2016-05-04

    Memory and attention are two cognitive domains pivotal for the performance of instrumental activities of daily living (IADLs). The assessment of these functions is still widely carried out with pencil-and-paper tests, which lack ecological validity. The evaluation of cognitive and memory functions while the patients are performing IADLs should contribute to the ecological validity of the evaluation process. The objective of this study is to establish normative data from virtual reality (VR) IADLs designed to activate memory and attention functions. A total of 243 non-clinical participants carried out a paper-and-pencil Mini-Mental State Examination (MMSE) and performed 3 VR activities: art gallery visual matching task, supermarket shopping task, and memory fruit matching game. The data (execution time and errors, and money spent in the case of the supermarket activity) was automatically generated from the app. Outcomes were computed using non-parametric statistics, due to non-normality of distributions. Age, academic qualifications, and computer experience all had significant effects on most measures. Normative values for different levels of these measures were defined. Age, academic qualifications, and computer experience should be taken into account while using our VR-based platform for cognitive assessment purposes. ©Pedro Gamito, Diogo Morais, Jorge Oliveira, Paulo Ferreira Lopes, Luís Felipe Picareli, Marcelo Matias, Sara Correia, Rodrigo Brito. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 04.05.2016.

  12. A Compute Environment of ABC95 Array Computer Based on Multi-FPGA Chip

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The ABC95 array computer is a multi-function-network computer based on FPGA technology. The multi-function network supports conflict-free processor access to data in memory, and supports processor-to-processor data access over an enhanced MESH network. The ABC95 instruction system includes control instructions, scalar instructions, and vector instructions; the network instructions in particular are introduced. A programming environment for the ABC95 array computer assembly language is designed, and a VC++-based programming environment for the ABC95 array computer is presented. The latter includes functions for loading ABC95 array computer programs and data, storing, running, and so on. In particular, the data type for conflict-free access on the ABC95 array computer is defined. The results show that these technologies allow programs for the ABC95 array computer to be developed effectively.

  13. Remote direct memory access

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.

    2012-12-11

    Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.
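
    The disclosed message flow is a three-phase handshake; the classes below are hypothetical stand-ins for the origin and target DMA engines, written only to make the sequence concrete (no actual DMA or messaging API is implied):

        class TargetDMA:
            def __init__(self):
                self.buffer, self.base = None, None

            def on_request_to_send(self, data_len):
                self.buffer = bytearray(data_len)    # prepare storage for the transfer
                self.base = 0                        # assign the base storage address
                return "ack"                         # signal readiness to receive

        class OriginDMA:
            def transfer(self, data, targets):
                # Phase 1: request to send, describing the data, to every target.
                acks = [t.on_request_to_send(len(data)) for t in targets]
                # Phase 2: proceed only once all targets have acknowledged.
                assert all(a == "ack" for a in acks)
                # Phase 3: one direct put into each target's prepared storage.
                for t in targets:
                    t.buffer[t.base:t.base + len(data)] = data

        targets = [TargetDMA() for _ in range(3)]
        OriginDMA().transfer(b"payload", targets)
        print([bytes(t.buffer) for t in targets])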

  14. Computer-Based Cognitive Programs for Improvement of Memory, Processing Speed and Executive Function during Age-Related Cognitive Decline: A Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Yan-kun Shao

    Full Text Available Several studies have assessed the effects of computer-based cognitive programs (CCP) in the management of age-related cognitive decline, but the role of CCP remains controversial. Therefore, this systematic review evaluated the evidence on the efficacy of CCP for age-related cognitive decline in healthy older adults. Six electronic databases (through October 2014) were searched. The risk of bias was assessed using the Cochrane Collaboration tool. The standardized mean difference (SMD) and 95% confidence intervals (CI) of a random-effects model were calculated. Heterogeneity was assessed using the Cochran Q statistic and quantified with the I² index. Twelve studies were included in the current review and were considered to be of moderate to high methodological quality. The aggregated results indicate that CCP improves memory performance (SMD, 0.31; 95% CI 0.16 to 0.45; p < 0.0001) and processing speed (SMD, 0.50; 95% CI 0.14 to 0.87; p = 0.007) but not executive function (SMD, -0.12; 95% CI -0.33 to 0.09; p = 0.27). Furthermore, there were long-term gains in memory performance (SMD, 0.59; 95% CI 0.13 to 1.05; p = 0.01). CCP may be a valid complementary and alternative therapy for age-related cognitive decline, especially for memory performance and processing speed. However, more studies with longer follow-ups are warranted to confirm the current findings.

  15. Detection of memory impairment among community-dwelling elderly by using the Rivermead Behavioural Memory Test

    International Nuclear Information System (INIS)

    Shinagawa, Shunichiro; Toyota, Yasutaka; Matsumoto, Teruhisa; Sonobe, Naomi; Adachi, Hiroyoshi; Mori, Takaaki; Ishikawa, Tomohisa; Fukuhara, Ryuji; Ikeda, Manabu

    2010-01-01

    The aim of this study was to use the Rivermead Behavioural Memory Test (RBMT) to evaluate everyday memory impairment among community-dwelling elderly who had normal cognitive function and performed daily activities normally but displayed memory impairments, and to diagnose the condition as either mild cognitive impairment or dementia. Among the 1,290 community-dwelling elderly persons who participated in the study, 72 subjects scored higher than 24 on the Mini-Mental State Examination (MMSE); these subjects performed daily activities normally, but their family members reported that they showed memory impairments. Fifty-two subjects completed the RBMT, Clinical Dementia Rating, and brain computed tomography, and a final diagnosis was established. The mean standard profile score was 15.1±5.0 and the mean screening score was 6.4±3.0. The RBMT score was correlated with the MMSE score. Nine of the subjects were diagnosed with dementia and 26 were found to be normal. The RBMT achieved 100% sensitivity and specificity in differentiating the subjects with Alzheimer's disease (AD), and was thus useful in detecting the memory impairments of AD subjects in community-based surveys. However, some subjects whose RBMT scores were above the cut-off were nevertheless diagnosed with dementia, owing to the existence of other cognitive impairments among the community-dwelling elderly. (author)

  16. Sparse distributed memory

    Science.gov (United States)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
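    Kanerva's scheme is concrete enough to sketch: random binary "hard locations" hold counters; a write updates the counters of every location whose address lies within a Hamming radius of the write address, and a read sums those counters and thresholds. The Python sketch below is a minimal autoassociative version; the dimensions, radius, and update rule are illustrative choices rather than Kanerva's exact parameters.

    # Minimal sparse distributed memory sketch (parameters are illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, RADIUS = 256, 2000, 112      # address width, hard locations, radius

    hard_addresses = rng.integers(0, 2, size=(M, N))
    counters = np.zeros((M, N), dtype=int)

    def activated(address):
        # Hard locations within RADIUS Hamming distance of the address.
        dists = np.count_nonzero(hard_addresses != address, axis=1)
        return dists <= RADIUS

    def write(address, data):
        counters[activated(address)] += np.where(data == 1, 1, -1)

    def read(address):
        # Sum counters of activated locations and threshold at zero.
        return (counters[activated(address)].sum(axis=0) > 0).astype(int)

    pattern = rng.integers(0, 2, size=N)
    write(pattern, pattern)                             # autoassociative storage
    noisy = pattern.copy()
    noisy[rng.choice(N, size=20, replace=False)] ^= 1   # corrupt 20 bits
    print(np.count_nonzero(read(noisy) != pattern))     # typically 0: recalled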

  17. Concepts and implementation of a virtual memory developments for business orientation

    International Nuclear Information System (INIS)

    Sablet, Georges de

    1976-05-01

    APL is a very powerful language especially suited to the manipulation of very large arrays. It is generally implemented as an interpreter embedded in a general system. The great power of the APL system and the great size of the information on which it may operate demand big computers and restrict the use of APL. We tried to find a memory management scheme which permits the implementation of an optimized APL interpreter on a mini-computer. This report presents the most important classical ways of managing memory and explains the system developed on the MULTI-20 (Intertechnique). The memory management is based on virtual memory principles with paging and segmentation. Two different page sizes are available, small and large, which may be used simultaneously and which optimize input/output and the use of auxiliary space. The other part of this report describes facilities for extending the language for users who are especially interested in business applications. We introduce generalized arrays, which do away with the concept of files: files are only structured arrays, and the user has no need to know how to manage tapes or a disk. To the user, everything appears to be in core memory. (author) [fr

  18. Software Alchemy: Turning Complex Statistical Computations into Embarrassingly-Parallel Ones

    Directory of Open Access Journals (Sweden)

    Norman Matloff

    2016-07-01

    Full Text Available The growth in the use of computationally intensive statistical procedures, especially with big data, has necessitated the usage of parallel computation on diverse platforms such as multicore, GPUs, clusters and clouds. However, slowdown due to interprocess communication costs typically limits such methods to "embarrassingly parallel" (EP) algorithms, especially on non-shared memory platforms. This paper develops a broadly applicable method for converting many non-EP algorithms into statistically equivalent EP ones. The method is shown to yield excellent levels of speedup for a variety of statistical computations. It also overcomes certain problems of memory limitations.
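    One well-known way to realize such a conversion, chunking the data, running the estimator independently on each chunk, and averaging the per-chunk estimates, matches the paper's description of turning a non-EP computation into an EP one. A minimal sketch under that assumption, with an OLS coefficient vector as the estimator and multiprocessing standing in for a cluster:

    # Chunk-averaging sketch: each chunk is fitted independently (embarrassingly
    # parallel), and the estimates are averaged afterwards.
    from multiprocessing import Pool
    import numpy as np

    def fit_chunk(chunk):
        X, y = chunk
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS on one chunk
        return beta

    if __name__ == '__main__':
        rng = np.random.default_rng(1)
        X = rng.normal(size=(100_000, 3))
        y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=100_000)

        chunks = list(zip(np.array_split(X, 8), np.array_split(y, 8)))
        with Pool(4) as pool:
            estimates = pool.map(fit_chunk, chunks)
        # To first order this average is statistically equivalent to fitting
        # the full data set, but no chunk ever communicates with another.
        print(np.mean(estimates, axis=0))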

  19. An introduction to digital computing

    CERN Document Server

    George, F H

    2014-01-01

    An Introduction to Digital Computing provides information pertinent to the fundamental aspects of digital computing. This book represents a major step towards the universal availability of programmed material.Organized into four chapters, this book begins with an overview of the fundamental workings of the computer, including the way it handles simple arithmetic problems. This text then provides a brief survey of the basic features of a typical computer that is divided into three sections, namely, the input and output system, the memory system for data storage, and a processing system. Other c

  20. Oscillations and Episodic Memory: Addressing the Synchronization/Desynchronization Conundrum

    OpenAIRE

    Hanslmayr, Simon; Staresina, Bernhard P.; Bowman, Howard

    2016-01-01

    Trends Data from rodent as well as human studies suggest that theta/gamma synchronization in the hippocampus (i.e., theta phase to gamma power cross-frequency coupling) mediates the binding of different elements in episodic memory. In vivo and in vitro animal studies suggest that theta provides selective time windows for fast-acting synaptic modifications and recent computational models have implemented these mechanisms to explain human memory formation and retrieval. Recent data from human e...

  1. Episodic memory, semantic memory, and amnesia.

    Science.gov (United States)

    Squire, L R; Zola, S M

    1998-01-01

    Episodic memory and semantic memory are two types of declarative memory. There have been two principal views about how this distinction might be reflected in the organization of memory functions in the brain. One view, that episodic memory and semantic memory are both dependent on the integrity of medial temporal lobe and midline diencephalic structures, predicts that amnesic patients with medial temporal lobe/diencephalic damage should be proportionately impaired in both episodic and semantic memory. An alternative view is that the capacity for semantic memory is spared, or partially spared, in amnesia relative to episodic memory ability. This article reviews two kinds of relevant data: 1) case studies where amnesia has occurred early in childhood, before much of an individual's semantic knowledge has been acquired, and 2) experimental studies with amnesic patients of fact and event learning, remembering and knowing, and remote memory. The data provide no compelling support for the view that episodic and semantic memory are affected differently in medial temporal lobe/diencephalic amnesia. However, episodic and semantic memory may be dissociable in those amnesic patients who additionally have severe frontal lobe damage.

  2. NRAM: a disruptive carbon-nanotube resistance-change memory

    Science.gov (United States)

    Gilmer, D. C.; Rueckes, T.; Cleveland, L.

    2018-04-01

    Advanced memory technology based on carbon nanotubes (CNTs) (NRAM) possesses desired properties for implementation in a host of integrated systems due to demonstrated advantages of its operation, including high speed (nanotubes can switch state in picoseconds), high endurance (over a trillion cycles), and low power (essentially zero standby power). The applicable integrated systems for NRAM have markets that will see compound annual growth rates (CAGR) of over 62% between 2018 and 2023, with an embedded systems CAGR of 115% in 2018-2023 (http://bccresearch.com/pressroom/smc/bcc-research-predicts:-nram-(finally)-to-revolutionize-computer-memory). These opportunities are helping drive the realization of a shift from silicon-based to carbon-based (NRAM) memories. NRAM is a memory cell made up of an interlocking matrix of CNTs, either touching or slightly separated, leading to low or higher resistance states respectively. The small movement of atoms, as opposed to the movement of electrons in traditional silicon-based memories, gives NRAM more robust endurance and high-temperature retention and operation, which, together with its high speed and low power, are expected to make this memory technology a disruptive replacement for the current status quo of DRAM (dynamic RAM), SRAM (static RAM), and NAND flash memories.

  3. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  4. Design and analysis of 2T-2M Ternary content addressable memories

    KAUST Repository

    Bahloul, M. A.; Fouda, M. E.; Naous, Rawan; Zidan, Mohammed A.; Eltawil, A. M.; Kurdahi, F.; Salama, Khaled N.

    2017-01-01

    Associative and approximate computing using resistive-memory-based ternary content addressable memories (TCAMs) is becoming widespread. In this paper, a simplified model-based analysis of a 2T-2M ternary content addressable memory using memristors is introduced. A comprehensive study is presented taking into consideration different circuit parameters and parasitic effects. Parameters such as the memristor Rh/Rl ratio, transistor technology, operating frequency, and memory width are taken into consideration. The proposed model is verified with SPICE, showing a high degree of matching between theory and simulation. The utility of the model is established using a design example.

  5. Design and analysis of 2T-2M Ternary content addressable memories

    KAUST Repository

    Bahloul, M. A.

    2017-10-24

    Associative and approximate computing using resistive-memory-based ternary content addressable memories (TCAMs) is becoming widespread. In this paper, a simplified model-based analysis of a 2T-2M ternary content addressable memory using memristors is introduced. A comprehensive study is presented taking into consideration different circuit parameters and parasitic effects. Parameters such as the memristor Rh/Rl ratio, transistor technology, operating frequency, and memory width are taken into consideration. The proposed model is verified with SPICE, showing a high degree of matching between theory and simulation. The utility of the model is established using a design example.

  6. Procedural Memory: Computer Learning in Control Subjects and in Parkinson’s Disease Patients

    Directory of Open Access Journals (Sweden)

    C. Thomas-Antérion

    1996-01-01

    Full Text Available We used perceptual-motor tasks involving learning to control a mouse while looking at a Macintosh computer screen. We studied 90 control subjects aged between sixteen and seventy-five years. Execution times differed significantly between age groups, but improvement was the same for all subjects. We also studied 24 patients with Parkinson's disease (PD). We observed an influence of age and also of educational level. The PD patients had learning difficulties in all tests, but they did not differ in time from the control group in the first learning session (Student's t-test). They learned two to four and a half times less well than the control group. In the first test, they had some difficulty in initiating the procedure and learned eight times less well than the control group. Performances seemed to be heterogeneous: patients with only tremor (seven) and patients without treatment (five) performed better than the others but learned less. Success in procedural tasks in the PD group seemed to depend on the capacity to initiate the response and not on the development of an accurate strategy. Many questions still remain unanswered, and we have to study different kinds of implicit memory tasks to differentiate performance between the control and basal ganglia groups.

  7. Adaptive Digital Predistortion Schemes to Linearize RF Power Amplifiers with Memory Effects

    Institute of Scientific and Technical Information of China (English)

    ZHANG Peng; WU Si-liang; ZHANG Qin

    2008-01-01

    To compensate for nonlinear distortion introduced by RF power amplifiers (PAs) with memory effects, two correlated models, namely an extended memory polynomial (EMP) model and a memory lookup table (LUT) model, are proposed for predistorter design. Two adaptive digital predistortion (ADPD) schemes with indirect learning architecture are presented. One adopts the EMP model and the recursive least square (RLS) algorithm, and the other utilizes the memory LUT model and the least mean square (LMS) algorithm. Simulation results demonstrate that the EMP-based ADPD yields the best linearization performance in terms of suppressing spectral regrowth. It is also shown that the ADPD based on memory LUT makes optimum tradeoff between performance and computational complexity.
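    A memory polynomial expresses the PA output as a sum of delayed nonlinear terms, y(n) = sum over k, m of a(k,m) * x(n-m) * |x(n-m)|^k, and the indirect learning architecture fits the predistorter as a least-squares post-inverse of the PA. The Python/NumPy sketch below illustrates both ideas with made-up coefficients and a toy two-tone signal; it is a generic memory polynomial, not the paper's EMP model, and plain least squares stands in for the RLS/LMS adaptation.

    import numpy as np

    def mem_poly(x, coeffs):
        # coeffs[k][m] multiplies x(n-m) * |x(n-m)|**k.
        y = np.zeros_like(x)
        for k, taps in enumerate(coeffs):
            for m, a in enumerate(taps):
                xd = np.roll(x, m)
                xd[:m] = 0                       # discard wrapped samples
                y = y + a * xd * np.abs(xd) ** k
        return y

    # Toy PA with mild third-order distortion and one tap of memory.
    pa = lambda x: mem_poly(x, [[1.0, 0.05], [0.0, 0.0], [-0.08, -0.01]])

    n = np.arange(2048)
    x = np.exp(2j * np.pi * 0.013 * n) + 0.5 * np.exp(2j * np.pi * 0.027 * n)

    def basis(x, K, M):
        # Regression matrix whose columns are the memory-polynomial terms.
        cols = []
        for k in range(K):
            for m in range(M):
                xd = np.roll(x, m)
                xd[:m] = 0
                cols.append(xd * np.abs(xd) ** k)
        return np.column_stack(cols)

    # Indirect learning: fit a postdistorter mapping PA output back to PA
    # input, then place the same function in front of the PA as predistorter.
    y = pa(x)
    w, *_ = np.linalg.lstsq(basis(y, 3, 2), x, rcond=None)
    z = basis(x, 3, 2) @ w
    err = np.mean(np.abs(pa(z) - x) ** 2) / np.mean(np.abs(x) ** 2)
    print(f'relative linearization error: {err:.2e}')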

  8. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  9. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  10. Short-term memory in networks of dissociated cortical neurons.

    Science.gov (United States)

    Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J

    2013-01-30

    Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.

  11. The Processing Using Memory Paradigm:In-DRAM Bulk Copy, Initialization, Bitwise AND and OR

    OpenAIRE

    Seshadri, Vivek; Mutlu, Onur

    2016-01-01

    In existing systems, the off-chip memory interface allows the memory controller to perform only read or write operations. Therefore, to perform any operation, the processor must first read the source data and then write the result back to memory after performing the operation. This approach consumes high latency, bandwidth, and energy for operations that work on a large amount of data. Several works have proposed techniques to process data near memory by adding a small amount of compute logic...

  12. Fog computing job scheduling optimization based on bees swarm

    Science.gov (United States)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA), aimed at addressing the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between the CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and genetic algorithms in terms of CPU execution time and allocated memory.
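    Any such scheduler needs a fitness function that scores a task-to-node assignment on the two objectives named above. The Python sketch below shows one plausible formulation, a weighted sum of makespan and memory cost, searched here by plain random sampling; the weights, node parameters, and search loop are illustrative placeholders rather than the paper's Bees Life Algorithm.

    # Illustrative fog-scheduling fitness: assignment[i] is the node index
    # running task i; lower fitness is better. All constants are made up.
    import random

    TASKS = [{'cycles': c, 'mem': m} for c, m in
             [(8e9, 512), (2e9, 128), (5e9, 256), (1e9, 64)]]
    NODES = [{'speed': 2e9, 'mem_cost': 1.0},   # cycles/s, relative memory cost
             {'speed': 1e9, 'mem_cost': 0.5}]

    def fitness(assignment, w_time=0.7, w_mem=0.3):
        load = [0.0] * len(NODES)
        mem = 0.0
        for task, node_idx in zip(TASKS, assignment):
            load[node_idx] += task['cycles'] / NODES[node_idx]['speed']
            mem += task['mem'] * NODES[node_idx]['mem_cost']
        return w_time * max(load) + w_mem * mem / 100.0   # makespan + memory

    best = min((tuple(random.randrange(len(NODES)) for _ in TASKS)
                for _ in range(1000)), key=fitness)
    print(best, round(fitness(best), 3))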

  13. Development of Ethernet emulation driver for reflective memory

    International Nuclear Information System (INIS)

    Seo, Seong-Heon

    2010-01-01

    Reflective memory (RFM) is adopted as a real time network in the KSTAR plasma control system (PCS). Since the data uploaded from any computer are automatically shared among all the computers on the RFM network, the design of a distributed control system based on RFM is easily implemented through the management of memory mapping. The data providers and consumers are logically well separated so that, if memory mapping information is given, a new control unit can be added without any modification to the existing system except connecting a new RFM module through an optical cable. The KSTAR PCS is also connected with the Ethernet in addition to the RFM because the RFM does not support the Transmission Control Protocol/Internet Protocol (TCP/IP) and many network services of the operating system, such as the Network File System (NFS) and the Secure Shell (SSH), are based on the TCP/IP. Therefore we developed an Ethernet emulation driver for the RFM to eliminate the need for a separate Ethernet network. The driver was tested on the Linux kernel 2.6.31. The algorithm of the emulation driver is explained and the experimental setup is presented.

  14. Several problems of algorithmization in integrated computation programs on third generation computers for short circuit currents in complex power networks

    Energy Technology Data Exchange (ETDEWEB)

    Krylov, V.A.; Pisarenko, V.P.

    1982-01-01

    Methods of modeling complex power networks with short circuits in the networks are described. The methods are implemented in integrated computation programs for short circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with a limited online memory capacity (M = 4030 for the computer).

  15. Memory-induced nonlinear dynamics of excitation in cardiac diseases.

    Science.gov (United States)

    Landaw, Julian; Qu, Zhilin

    2018-04-01

    Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.
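    The iterated-map formulation is compact: at a fixed pacing period, the next action potential duration (APD) follows from a restitution curve of the preceding diastolic interval, and a slow memory variable that accumulates with activity rescales that curve. The Python sketch below uses a generic single-exponential restitution curve and an illustrative memory update; the functional forms and constants are stand-ins, not the specific ion-channel or calcium models analyzed in the paper.

    # Generic APD map with memory: A = APD, D = diastolic interval, M = memory.
    import math

    BCL, TAU, ALPHA = 300.0, 500.0, 0.3          # ms; illustrative constants

    def restitution(d):
        return 250.0 * (1.0 - math.exp(-d / 80.0))   # monotonic, in ms

    def step(a, m):
        d = max(BCL - a, 1.0)                        # diastolic interval
        m_next = (m + a / BCL) * math.exp(-d / TAU)  # grows with APD, decays
        a_next = (1.0 - ALPHA * m_next) * restitution(d)
        return a_next, m_next

    a, m = 200.0, 0.0
    orbit = []
    for n in range(300):
        a, m = step(a, m)
        if n >= 290:
            orbit.append(round(a, 1))
    # Depending on BCL, TAU and ALPHA the attractor is period-1, alternans
    # (period-2), or irregular; memory reshapes the effective map.
    print(orbit)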

  16. Comparing soil moisture memory in satellite observations and models

    Science.gov (United States)

    Stacke, Tobias; Hagemann, Stefan; Loew, Alexander

    2013-04-01

    A major obstacle to a correct parametrization of soil processes in large-scale global land surface models is the lack of long-term soil moisture observations for large parts of the globe. Currently, a compilation of soil moisture data derived from a range of satellites is released by the ESA Climate Change Initiative (ECV_SM). Comprising the period from 1978 until 2010, it provides the opportunity to compute climatologically relevant statistics on a quasi-global scale and to compare these to the output of climate models. Our study is focused on the investigation of soil moisture memory in satellite observations and models. As a proxy for memory we compute the autocorrelation length (ACL) of the available satellite data and the uppermost soil layer of the models. In addition to the ECV_SM data, AMSR-E soil moisture is used as an observational estimate. Simulated soil moisture fields are taken from ERA-Interim reanalysis and generated with the land surface model JSBACH, which was driven with quasi-observational meteorological forcing data. The satellite data show ACLs between one week and one month for the greater part of the land surface, while the models simulate a longer memory of up to two months. Some patterns are similar in models and observations, e.g. a longer memory in the Sahel Zone and the Arabian Peninsula, but the models are not able to reproduce regions with a very short ACL of just a few days. If the long-term seasonality is subtracted from the data, the memory is strongly shortened, indicating the importance of seasonal variations for the memory in most regions. Furthermore, we analyze the change of soil moisture memory in the different soil layers of the models to investigate to which extent the surface soil moisture includes information about the whole soil column. A first analysis reveals that the ACL is increasing for deeper layers. However, its increase is stronger in the soil moisture anomaly than in its absolute values and the first even exceeds the
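    The memory proxy used here is simple to compute: take the autocorrelation length (ACL) of a daily series as the first lag at which its autocorrelation falls below 1/e, and note how subtracting the seasonal cycle, as the study does, shortens it. A Python sketch on a synthetic AR(1)-plus-seasonality series (purely illustrative):

    import numpy as np

    def acl_days(x):
        # First lag at which the autocorrelation drops below 1/e.
        x = x - x.mean()
        denom = np.dot(x, x)
        for lag in range(1, len(x) // 2):
            if np.dot(x[:-lag], x[lag:]) / denom < 1.0 / np.e:
                return lag
        return None

    rng = np.random.default_rng(2)
    days = np.arange(3 * 365)
    seasonal = 0.3 * np.sin(2 * np.pi * days / 365)
    red = np.zeros(len(days))
    for t in range(1, len(days)):                # AR(1) "memory" component
        red[t] = 0.95 * red[t - 1] + rng.normal(scale=0.05)
    sm = 0.5 + seasonal + red

    # Raw ACL is inflated by the seasonal cycle; the anomaly ACL is shorter.
    print(acl_days(sm), acl_days(sm - seasonal))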

  17. Collectively loading an application in a parallel computer

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.

  18. Towards Scalable Graph Computation on Mobile Devices.

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. By creating a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device.
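    The mechanism behind this is ordinary memory mapping: the edge list stays in a file, the OS pages fixed-size edge records in and out on demand, and the computation streams over the mapping, so the graph never has to fit in RAM at once. A minimal Python sketch; the binary layout (pairs of little-endian uint32 per edge) is made up for illustration and is not the paper's file format.

    import mmap, os, struct

    EDGE = struct.Struct('<II')                  # one edge: (source, target)

    def write_edges(path, edges):
        with open(path, 'wb') as f:
            for u, v in edges:
                f.write(EDGE.pack(u, v))

    def out_degrees(path, n_vertices):
        deg = [0] * n_vertices
        with open(path, 'rb') as f, \
             mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            # The OS pages 8-byte records in on demand; only `deg` must
            # fit in memory, not the edge list itself.
            for off in range(0, len(mm), EDGE.size):
                u, _v = EDGE.unpack_from(mm, off)
                deg[u] += 1
        return deg

    write_edges('toy.bin', [(0, 1), (0, 2), (1, 2), (2, 0)])
    print(out_degrees('toy.bin', 3))             # [2, 1, 1]
    os.remove('toy.bin')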

  19. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. By creating a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device. PMID:25859564

  20. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Full Text Available Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds when utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, or Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation can be similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  1. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds when utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, or Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation can be similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  2. Memory blindness: Altered memory reports lead to distortion in eyewitness memory.

    Science.gov (United States)

    Cochran, Kevin J; Greenspan, Rachel L; Bogart, Daniel F; Loftus, Elizabeth F

    2016-07-01

    Choice blindness refers to the finding that people can often be misled about their own self-reported choices. However, little research has investigated the more long-term effects of choice blindness. We examined whether people would detect alterations to their own memory reports, and whether such alterations could influence participants' memories. Participants viewed slideshows depicting crimes, and then either reported their memories for episodic details of the event (Exp. 1) or identified a suspect from a lineup (Exp. 2). Then we exposed participants to manipulated versions of their memory reports, and later tested their memories a second time. The results indicated that the majority of participants failed to detect the misinformation, and that exposing witnesses to misleading versions of their own memory reports caused their memories to change to be consistent with those reports. These experiments have implications for eyewitness memory.

  3. Electric-field-controlled interface dipole modulation for Si-based memory devices.

    Science.gov (United States)

    Miyata, Noriyuki

    2018-05-31

    Various nonvolatile memory devices have been investigated to replace Si-based flash memories or emulate synaptic plasticity for next-generation neuromorphic computing. A crucial criterion to achieve low-cost high-density memory chips is material compatibility with conventional Si technologies. In this paper, we propose and demonstrate a new memory concept, interface dipole modulation (IDM) memory. IDM can be integrated as a Si field-effect transistor (FET) based memory device. The first demonstration of this concept employed a HfO2/Si MOS capacitor where the interface monolayer (ML) TiO2 functions as a dipole modulator. However, this configuration is unsuitable for Si-FET-based devices due to its large interface state density (Dit). Consequently, we propose a multi-stacked amorphous HfO2/1-ML TiO2/SiO2 IDM structure to realize a low Dit and a wide memory window. Herein we describe the quasi-static and pulse response characteristics of multi-stacked IDM MOS capacitors and demonstrate flash-type and analog memory operations of an IDM FET device.

  4. The focus of attention in working memory – from metaphors to mechanisms

    Directory of Open Access Journals (Sweden)

    Klaus eOberauer

    2013-10-01

    Full Text Available Many verbal theories describe working memory in terms of physical metaphors such as information flow or information containers. These metaphors are often useful but can also be misleading. This article contrasts the verbal version of the author’s three-embedded-component theory with a computational implementation of the theory. The analysis focuses on phenomena that have been attributed to the focus of attention in working memory. The verbal theory characterizes the focus of attention by a container metaphor, which gives rise to questions such as: How many items fit into the focus? The computational model explains the same phenomena mechanistically through a combination of strengthened bindings between items and their retrieval cues, and priming of these cues. The author applies the computational model to three findings that have been used to argue about how many items can be held in the focus of attention (Gilchrist & Cowan, Journal of Experimental Psychology: Learning, Memory, and Cognition, 2011; Oberauer & Bialkova, Journal of Experimental Psychology: General, 2009; Oberauer & Bialkova, Journal of Experimental Psychology: Human Perception and Performance, 2011. The modeling results imply a new interpretation of those findings: The different patterns of results across those studies don’t imply different capacity estimates for the focus of attention; they rather reflect to what extent retrieval from working memory is parallel or serial.

  5. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed, and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following is found: 1) there is a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of computer memory size compared with the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of outer iterations compared with free iteration. (author)
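    The table-look-up device mentioned above is easy to illustrate: precompute exp(-x) once on a uniform grid, then replace each library call in the inner sweep with a linear interpolation between two neighboring table entries. A small Python sketch (grid range and size are illustrative):

    import math

    XMAX, N = 20.0, 4096
    STEP = XMAX / N
    TABLE = [math.exp(-i * STEP) for i in range(N + 1)]

    def exp_neg(x):
        # Approximate exp(-x) for x >= 0 by linear interpolation in the table.
        if x >= XMAX:
            return 0.0
        i, frac = divmod(x / STEP, 1.0)
        i = int(i)
        return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

    # Maximum absolute error over a dense sample of the domain (about 3e-6):
    print(max(abs(exp_neg(x / 100) - math.exp(-x / 100)) for x in range(2000)))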

  6. FMT (Flight Software Memory Tracker) For Cassini Spacecraft-Software Engineering Using JAVA

    Science.gov (United States)

    Kan, Edwin P.; Uffelman, Hal; Wax, Allan H.

    1997-01-01

    The software engineering design of the Flight Software Memory Tracker (FMT) Tool is discussed in this paper. FMT is a ground analysis software set, consisting of utilities and procedures, designed to track the flight software, i.e., images of memory load and updatable parameters of the computers on-board Cassini spacecraft. FMT is implemented in Java.

  7. Computer assisted treatments for image pattern data of laser plasma experiments

    International Nuclear Information System (INIS)

    Yaoita, Akira; Matsushima, Isao

    1987-01-01

    An image data processing system for laser-plasma experiments has been constructed. The image data are two-dimensional images taken by X-ray, UV, infrared, and visible light television cameras, and also by streak cameras. They are digitized by frame memories. The digitized image data are stored in disk memories with the aid of a microcomputer. The data are processed by a host computer and stored in the files of the host computer and on magnetic tapes. In this paper, an overview of the image data processing system and some software for data handling in the host computer are reported. (author)

  8. Are subjective memory problems related to suggestibility, compliance, false memories, and objective memory performance?

    Science.gov (United States)

    Van Bergen, Saskia; Jelicic, Marko; Merckelbach, Harald

    2009-01-01

    The relationship between subjective memory beliefs and suggestibility, compliance, false memories, and objective memory performance was studied in a community sample of young and middle-aged people (N = 142). We hypothesized that people with subjective memory problems would exhibit higher suggestibility and compliance levels and would be more susceptible to false recollections than those who are optimistic about their memory. In addition, we expected a discrepancy between subjective memory judgments and objective memory performance. We found that subjective memory judgments correlated significantly with compliance, with more negative memory judgments accompanying higher levels of compliance. Contrary to our expectation, subjective memory problems did not correlate with suggestibility or false recollections. Furthermore, participants were accurate in estimating their objective memory performance.

  9. Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1

    International Nuclear Information System (INIS)

    Kawasaki, Shoji; Nakamura, Kazuo; Nakamura, Yukio; Hiraki, Naoharu; Toi, Kazuo

    1981-01-01

    A data processing system was designed and constructed for the purpose of analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 K byte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to keep the programs of the computer. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer can be connected as an I/O device. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown on one of the I/O devices. The results of a test run showed good performance. (Kato, T.)

  10. Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1

    Energy Technology Data Exchange (ETDEWEB)

    Kawasaki, S; Nakamura, K; Nakamura, Y; Hiraki, N; Toi, K [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics

    1981-02-01

    A data processing system was designed and constructed for the purpose of analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 K byte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to keep the programs of the computer. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer can be connected as an I/O device. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown on one of the I/O devices. The results of a test run showed good performance.

  11. An Investigation of Unified Memory Access Performance in CUDA

    Science.gov (United States)

    Landaverde, Raphael; Zhang, Tiansheng; Coskun, Ayse K.; Herbordt, Martin

    2015-01-01

    Managing memory between the CPU and GPU is a major challenge in GPU computing. A programming model, Unified Memory Access (UMA), has been recently introduced by Nvidia to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and programming model simplifications based on our experimental results. We find that beyond on-demand data transfers to the CPU, the GPU is also able to request subsets of data it requires on demand. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns, the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations. PMID:26594668

  12. Innovation of the computer system for the WWER-440 simulator

    International Nuclear Information System (INIS)

    Schrumpf, L.

    1988-01-01

    The configuration of the WWER-440 simulator computer system consists of four SMEP computers. The basic data processing unit consists of two interlinked SM 52/11.M1 computers with 1 MB of main memory. This part of the computer system of the simulator controls the operation of the entire simulator, processes the programs of technology behavior simulation, of the unit information system and of other special systems, guarantees program support and the operation of the instructor's console. An SM 52/11 computer with 256 kB of main memory is connected to each unit. It is used as a communication unit for data transmission using the DASIO 600 interface. Semigraphic color displays are based on the microprocessor modules of the SM 50/40 and SM 53/10 kits, supplemented with a modified TESLA COLOR 110 ST TV receiver. (J.B.). 1 fig

  13. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  14. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulation on its computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  15. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple instruction multiple data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  16. Systematic Development Strategy for Smart Devices Based on Shape-Memory Polymers

    Directory of Open Access Journals (Sweden)

    Andrés Díaz Lantada

    2017-10-01

    Full Text Available Shape-memory polymers are outstanding "smart" materials, which can undergo significant geometrical changes when activated by several types of external stimuli, and which can be applied in several emerging engineering fields, from aerospace applications to the development of biomedical devices. The fact that several shape-memory polymers can be structured in an additive way is an especially noteworthy advantage, as advanced actuators with complex geometries for improved performance can be developed, if adequate design and manufacturing considerations are taken into account. The present study reviews challenges and good practices, leading to a straightforward methodology (or integration of strategies) for the development of "smart" actuators based on shape-memory polymers. The combination of computer-aided design, computer-aided engineering, and additive manufacturing technologies is analyzed and applied to the complete development of interesting shape-memory-polymer-based actuators. Aspects such as geometrical design and optimization, development of the activation system, selection of adequate materials and related manufacturing technologies, training of the shape-memory effect, and final integration and testing are considered as key processes of the methodology. Current trends, including the use of low-cost 3D and 4D printing, and main challenges, including process eco-efficiency and biocompatibility, are also discussed, and their impact on the proposed methodology is considered.

  17. Quantum memory for images: A quantum hologram

    International Nuclear Information System (INIS)

    Vasilyev, Denis V.; Sokolov, Ivan V.; Polzik, Eugene S.

    2008-01-01

    Matter-light quantum interface and quantum memory for light are important ingredients of quantum information protocols, such as quantum networks, distributed quantum computation, etc. [P. Zoller et al., Eur. Phys. J. D 36, 203 (2005)]. In this paper we present a spatially multimode scheme for quantum memory for light, which we call a quantum hologram. Our approach uses a multiatom ensemble which has been shown to be efficient for a single spatial mode quantum memory. Due to the multiatom nature of the ensemble and to the optical parallelism, it is capable of storing many spatial modes, a feature critical for the present proposal. A quantum hologram with a fidelity exceeding that of a classical hologram will be able to store quantum features of an image, such as multimode superposition and entangled quantum states, something that a standard hologram is unable to achieve.

  18. Subversion: The Neglected Aspect of Computer Security.

    Science.gov (United States)

    1980-06-01

  19. Propagation of soil moisture memory to runoff and evapotranspiration

    Science.gov (United States)

    Orth, R.; Seneviratne, S. I.

    2012-10-01

    As a key variable of the land-climate system, soil moisture is a main driver of runoff and evapotranspiration under certain conditions. Soil moisture furthermore exhibits outstanding memory (persistence) characteristics. For runoff, too, many studies report distinct low-frequency variations that represent a memory. Using data from over 100 near-natural catchments located across Europe, we investigate in this study the connection between soil moisture memory and the respective memory of runoff and evapotranspiration on different time scales. For this purpose we use a simple water balance model in which the dependencies of runoff (normalized by precipitation) and evapotranspiration (normalized by radiation) on soil moisture are fitted using runoff observations. The model therefore allows the memory of soil moisture, runoff, and evapotranspiration to be computed at the catchment scale. We find considerable memory in soil moisture and runoff in many parts of the continent, and evapotranspiration also displays some memory on a monthly time scale in some catchments. We show that the memory of runoff and evapotranspiration jointly depends on soil moisture memory and on the strength of the coupling of runoff and evapotranspiration to soil moisture. Furthermore, we find that the coupling strengths of runoff and evapotranspiration to soil moisture depend on the shape of the fitted dependencies and on the variance of the meteorological forcing. To better interpret the magnitude of the respective memories across Europe, we finally provide a new perspective on hydrological memory by relating it to the mean duration required to recover from anomalies exceeding a certain threshold.
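    The model class described here is compact enough to sketch: a single soil moisture store is updated daily, with runoff taken as precipitation scaled by a fitted function of soil moisture and evapotranspiration as available energy scaled by another. In the Python sketch below the power-law shapes, the storage capacity, and the synthetic forcing are illustrative stand-ins for the dependencies the authors fit to runoff observations.

    import numpy as np

    CAPACITY = 400.0                      # mm, illustrative storage capacity

    def step(w, precip, energy, alpha=2.0, gamma=0.5):
        frac = w / CAPACITY
        runoff = precip * frac ** alpha   # runoff ratio rises with wetness
        et = energy * frac ** gamma       # ET ratio saturates with wetness
        w = np.clip(w + precip - runoff - et, 0.0, CAPACITY)
        return w, runoff, et

    rng = np.random.default_rng(3)
    w, series = 200.0, []
    for day in range(730):
        precip = rng.exponential(2.5)     # mm/day
        w, runoff, et = step(w, precip, energy=2.0)
        series.append(w)

    series = np.array(series)
    lag1 = np.corrcoef(series[:-1], series[1:])[0, 1]
    print(f'soil moisture lag-1 autocorrelation: {lag1:.2f}')   # near 1: memory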

  20. Mini-computer in standard CAMAC

    International Nuclear Information System (INIS)

    Meyer, J.M.; Perrin, J.; Lecoq, J.; Tedjini, H.; Metzger, G.

    1975-01-01

    CAMAC is the designation of rules for the design and use of modular electronic data-handling equipment. The rules offer a standard scheme for interfacing computers to transducers and actuators in on-line systems. Where systems do not need a large memory capacity, or where computing power is provided by an associated computer, a processor implemented in a CAMAC structure is of great interest for such a standard. Such a processor was built around an INTEL 8008 CPU chip, using a CAMAC crate, a memory bus, an I/O bus (the CAMAC horizontal Dataway) and a bus connecting the CPU to the operator's panel. The interrupt system has six levels. To allow multiprogramming, the 8008's instruction set was extended with the creation of a 'jump and mark' instruction. A multi-task operating system was implemented, allowing the execution of real-time tasks, process control and program debugging. Three units have been built to date, for: process control, education, testing of CAMAC modules, and image processing [fr

  1. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    Science.gov (United States)

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    Increase in the complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large fraction of the overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it adversely affects MPI_Allgather, which increases the communication time between processors. This necessitates improving the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves on the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulation of large neuronal network models.
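
    The exchange pattern described above is compact enough to sketch. The following is a minimal illustration, not the authors' NEURON code: a recursive-doubling allgather built on one-sided MPI (RMA) using mpi4py, assuming a power-of-two number of ranks, fence synchronization, and a fixed per-rank integer block standing in for spike data.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()    # size assumed a power of two

    block = 4                                        # ints contributed per rank
    buf = np.zeros(size * block, dtype=np.intc)
    buf[rank * block:(rank + 1) * block] = rank      # stand-in for local spikes

    # Expose the buffer as an RMA window; displacements below are in elements.
    win = MPI.Win.Create(buf, disp_unit=buf.itemsize, comm=comm)

    step = 1
    while step < size:
        partner = rank ^ step                        # recursive-doubling partner
        lo = (rank // step) * step * block           # start of blocks owned so far
        n = step * block                             # amount accumulated so far
        win.Fence()
        # One-sided transfer: deposit our accumulated blocks into the partner.
        win.Put([buf[lo:lo + n], MPI.INT], target_rank=partner,
                target=(lo, n, MPI.INT))
        win.Fence()
        step *= 2                                    # data held doubles each round

    win.Free()
    # buf now matches an MPI_Allgather result, reached in log2(size) rounds.

    Each rank finishes in log2(size) rounds, which is the "precise steps" property the abstract refers to; the one-sided Put replaces the matched send/receive pair of the two-sided version.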

  2. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    Directory of Open Access Journals (Sweden)

    Danish Shehzad

    2016-01-01

    Full Text Available Increase in the complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large fraction of the overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it adversely affects MPI_Allgather, which increases the communication time between processors. This necessitates improving the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves on the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulation of large neuronal network models.

  3. A discrete Fourier transform for virtual memory machines

    Science.gov (United States)

    Galant, David C.

    1992-01-01

    An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence of numbers to be transformed is a power of two.
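
    The core idea, organizing one long transform as many short transforms so that each pass sweeps a small working set of pages, can be illustrated with the classic four-step factorization. This is a generic sketch of that standard decomposition in Python/NumPy, not Galant's FORTRAN algorithm.

    import numpy as np

    def four_step_fft(x, n1, n2):
        """FFT of length n1*n2 via short FFTs on a 2-D view; each pass
        traverses memory contiguously, which is friendly to paging."""
        assert x.size == n1 * n2
        a = x.reshape(n1, n2).astype(complex)     # row-major n1 x n2 view
        a = np.fft.fft(a, axis=0)                 # n2 FFTs of length n1
        rows = np.arange(n1)[:, None]
        cols = np.arange(n2)[None, :]
        a *= np.exp(-2j * np.pi * rows * cols / (n1 * n2))  # twiddle factors
        a = np.fft.fft(a, axis=1)                 # n1 FFTs of length n2
        return a.T.reshape(-1)                    # transpose fixes output order

    x = np.random.rand(8 * 16) + 1j * np.random.rand(8 * 16)
    assert np.allclose(four_step_fft(x, 8, 16), np.fft.fft(x))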

  4. Real-world-time simulation of memory consolidation in a large-scale cerebellar model

    Directory of Open Access Journals (Sweden)

    Masato eGosui

    2016-03-01

    Full Text Available We report the development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec in real-world time, with a temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out a computer simulation of the long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  5. Computer-Based Working Memory Training in Children with Mild Intellectual Disability

    Science.gov (United States)

    Delavarian, Mona; Bokharaeian, Behrouz; Towhidkhah, Farzad; Gharibzadeh, Shahriar

    2015-01-01

    We designed a working memory (WM) training programme in a game framework for students with mild intellectual disability. Twenty-four students participated as test and control groups. Auditory and visual-spatial WM were assessed by a primary test, which included computerised Wechsler numerical forward and backward sub-tests, and secondary tests, which…

  6. Quantum memory

    Science.gov (United States)

    Le Gouët, Jean-Louis; Moiseev, Sergey

    2012-06-01

    Interaction of quantum radiation with multi-particle ensembles has sparked off intense research efforts during the past decade. Emblematic of this field is the quantum memory scheme, where a quantum state of light is mapped onto an ensemble of atoms and then recovered in its original shape. While opening new access to the basics of light-atom interaction, quantum memory also appears as a key element for information processing applications, such as linear optics quantum computation and long-distance quantum communication via quantum repeaters. Not surprisingly, it is far from trivial to practically recover a stored quantum state of light and, although impressive progress has already been accomplished, researchers are still struggling to reach this ambitious objective. This special issue provides an account of the state-of-the-art in a fast-moving research area that makes physicists, engineers and chemists work together at the forefront of their discipline, involving quantum fields and atoms in different media, magnetic resonance techniques and material science. Various strategies have been considered to store and retrieve quantum light. The explored designs belong to three main, while still overlapping, classes. In architectures derived from photon echo, information is mapped over the spectral components of inhomogeneously broadened absorption bands, such as those encountered in rare-earth-ion-doped crystals and atomic gases in an external gradient magnetic field. Protocols based on electromagnetically induced transparency also rely on resonant excitation and are ideally suited to the homogeneous absorption lines offered by laser-cooled atomic clouds or ion Coulomb crystals. Finally, off-resonance approaches are illustrated by Faraday and Raman processes. Coupling with an optical cavity may enhance the storage process, even for negligibly small atom numbers. Multiple scattering is also proposed as a way to enlarge the quantum interaction distance of light with matter.

  7. Ensemble clustering in visual working memory biases location memories and reduces the Weber noise of relative positions.

    Science.gov (United States)

    Lew, Timothy F; Vul, Edward

    2015-01-01

    People seem to compute the ensemble statistics of objects and use this information to support the recall of individual objects in visual working memory. However, there are many different ways that hierarchical structure might be encoded. We examined the format of structured memories by asking subjects to recall the locations of objects arranged in different spatial clustering structures. Consistent with previous investigations of structured visual memory, subjects recalled objects biased toward the center of their clusters. Subjects also recalled locations more accurately when they were arranged in fewer clusters containing more objects, suggesting that subjects used the clustering structure of objects to aid recall. Furthermore, subjects had more difficulty recalling larger relative distances, consistent with subjects encoding the positions of objects relative to clusters and recalling them with magnitude-proportional (Weber) noise. Our results suggest that clustering improved the fidelity of recall by biasing the recall of locations toward cluster centers to compensate for uncertainty and by reducing the magnitude of encoded relative distances.
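
    A minimal generative sketch of the encoding scheme described above (the shrinkage and noise parameters are illustrative assumptions, not the authors' fitted values): each location is stored as its cluster's center plus a relative offset, and recall shrinks the offset toward the center and perturbs it with magnitude-proportional (Weber) noise.

    import numpy as np

    rng = np.random.default_rng(0)

    def recall(positions, labels, shrink=0.3, weber=0.15):
        """positions: (n, 2) true locations; labels: cluster id per item.
        Returns recalled locations biased toward cluster centers."""
        out = np.empty_like(positions, dtype=float)
        for c in np.unique(labels):
            idx = labels == c
            center = positions[idx].mean(axis=0)          # ensemble statistic
            offset = positions[idx] - center              # relative position
            scale = np.linalg.norm(offset, axis=1, keepdims=True)
            noise = rng.normal(size=offset.shape) * weber * scale  # Weber noise
            out[idx] = center + (1 - shrink) * offset + noise
        return out

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0], [11.0, 10.0]])
    labs = np.array([0, 0, 1, 1])
    print(recall(pts, labs))

    Larger offsets receive proportionally larger noise, mirroring the greater difficulty with large relative distances reported above.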

  8. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    Science.gov (United States)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithm techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by convention

  9. Rapid formation and flexible expression of memories of subliminal word pairs.

    Science.gov (United States)

    Reber, Thomas P; Henke, Katharina

    2011-01-01

    Our daily experiences are incidentally and rapidly encoded as episodic memories. Episodic memories consist of numerous associations (e.g., who gave what to whom, where and when) that can be expressed flexibly in new situations. Key features of episodic memory are speed of encoding, its associative nature, and its representational flexibility. Another defining feature of human episodic memory has been consciousness of encoding/retrieval. Here, we show that humans can rapidly form associations between subliminal words and minutes later retrieve these associations, even if retrieval words were conceptually related to, but different from, encoding words. Because encoding words were presented subliminally, associative encoding and retrieval were unconscious. Unconscious association formation and retrieval were dependent on a preceding understanding of task principles. We conclude that key computations underlying episodic memory - rapid encoding and flexible expression of associations - can operate outside consciousness.

  10. Next generation spin torque memories

    CERN Document Server

    Kaushik, Brajesh Kumar; Kulkarni, Anant Aravind; Prajapati, Sanjay

    2017-01-01

    This book offers detailed insights into spin transfer torque (STT) based devices, circuits and memories. Starting with the basic concepts and device physics, it then addresses advanced STT applications and discusses the outlook for this cutting-edge technology. It also describes the architectures, performance parameters, fabrication, and the prospects of STT based devices. Further, moving from the device to the system perspective, it presents a non-volatile computing architecture composed of STT-based magneto-resistive and all-spin logic devices and demonstrates that efficient STT-based magneto-resistive and all-spin logic devices can turn the dream of instant on/off non-volatile computing into reality.

  11. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p² + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
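
    The single-pass technique is easy to state concretely. The sketch below is a generic illustration of accumulating the weighted normal-equation products row by row, not the publication's reference implementation: X is never stored, and memory stays proportional to p² for the p x p product, in line with the O(8p² + 16p) figure for 8-byte doubles.

    import numpy as np

    def ols_one_pass(rows, p):
        """rows yields (x, y, w): predictor vector, response, weight.
        Memory is O(p^2) regardless of the number of observations N."""
        xtx = np.zeros((p, p))                 # accumulates X'WX
        xty = np.zeros(p)                      # accumulates X'Wy
        for x, y, w in rows:
            xtx += w * np.outer(x, x)          # rank-1 update per observation
            xty += w * y * x
        return np.linalg.solve(xtx, xty)       # solve the normal equations

    data = [(np.array([1.0, xi]), 2.0 + 3.0 * xi, 1.0) for xi in range(5)]
    print(ols_one_pass(iter(data), p=2))       # ~ [2, 3]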

  12. Identifying Memory Allocation Patterns in HEP Software

    Science.gov (United States)

    Kama, S.; Rauschmayr, N.

    2017-10-01

    HEP applications perform an excessive number of allocations/deallocations within short time intervals, which results in memory churn, poor locality and performance degradation. These issues have been known for a decade, but due to the complexity of software frameworks and the billions of allocations in a single job, until recently no efficient mechanism was available to correlate these issues with source code lines. However, with the advent of the Big Data era, many tools and platforms are now available to do large-scale memory profiling. This paper presents a prototype program developed to track and identify every single (de-)allocation. The CERN IT Hadoop cluster is used to compute memory key metrics, like locality, variation, lifetime and density of allocations. The prototype further provides a web-based visualization back-end that allows the user to explore the results generated on the Hadoop cluster. Plotting these metrics for every single allocation over time gives new insight into an application’s memory handling. For instance, it shows which algorithms cause which kind of memory allocation patterns, which function flow causes how many short-lived objects, what the most commonly allocated sizes are, etc. The paper gives an insight into the prototype and shows profiling examples for the LHC reconstruction, digitization and simulation jobs.
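
    As a toy illustration of the metrics named above (the trace format and thresholds are assumptions, not the prototype's), allocation lifetime and a crude churn indicator can be computed from a list of (size, alloc_time, free_time) records:

    def alloc_metrics(trace, short_cutoff=1e-3):
        """trace: iterable of (size_bytes, alloc_time_s, free_time_s)."""
        trace = list(trace)
        lifetimes = [free - alloc for _, alloc, free in trace]
        return {
            "mean_lifetime_s": sum(lifetimes) / len(lifetimes),
            "short_lived_fraction": sum(t < short_cutoff for t in lifetimes)
                                    / len(trace),
            "bytes_allocated": sum(size for size, _, _ in trace),  # churn proxy
        }

    print(alloc_metrics([(64, 0.0, 0.0005), (1024, 0.0, 2.0)]))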

  13. Read method compensating parasitic sneak currents in a crossbar memristive memory

    KAUST Repository

    Zidan, Mohammed A.

    2017-03-02

    Methods are provided for mitigating problems caused by sneak-path currents during memory cell access in gateless arrays. The example methods contemplated herein employ adaptive-threshold readout techniques that exploit the locality and hierarchy properties of the computer memory system to address this sneak-path problem. The method of the invention is a method for reading a target memory cell located at an intersection of a target row of a gateless array and a target column of the gateless array, the method comprising: reading a value of the target memory cell; and calculating an actual value of the target memory cell based on the read value of the memory cell and a component of the read value caused by sneak-path current. Utilizing either an "initial bits" strategy or a "dummy bits" strategy to calculate the component of the read value caused by sneak-path current, example embodiments significantly reduce the number of memory accesses per pixel for an array readout. In addition, these strategies consume an order of magnitude less power in comparison to alternative state-of-the-art readout techniques.
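
    The arithmetic behind the "dummy bits" variant can be sketched in a few lines (an illustration of the idea, not the patented circuit): a reference cell of known state in the same region is read first, its excess current estimates the sneak-path component, and that estimate is subtracted from the raw read of the target cell.

    def read_corrected(raw_target, raw_dummy, dummy_known_value):
        """All values in consistent current or ADC units (illustrative)."""
        sneak_estimate = raw_dummy - dummy_known_value  # dummy's excess = sneak
        return raw_target - sneak_estimate              # corrected cell value

    # Example: a dummy stored as 0 reads 0.3 units, so sneak ~ 0.3;
    # a raw target read of 1.25 is corrected to ~0.95.
    print(read_corrected(1.25, 0.3, 0.0))

    The locality property mentioned above allows one such estimate to be reused across nearby cells, which is how the readout cost per pixel drops.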

  14. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  15. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  16. Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster

    Energy Technology Data Exchange (ETDEWEB)

    Allada, Veerendra, Benjegerdes, Troy; Bode, Brett

    2009-08-31

    Commodity clusters augmented with application accelerators are evolving into competitive high-performance computing systems. The Graphical Processing Unit (GPU), with a very high arithmetic density and performance-per-price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementation of the Basic Linear Algebra Subroutines (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, they study the performance of the memory copies and GEMM subroutines that are critical to porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single- and double-precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results have been compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.
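
    The flavor of the GEMM measurement is easy to reproduce; the sketch below times a single-precision matrix multiply with NumPy on the host and reports GFLOP/s (the paper's actual numbers came from CUBLAS and MKL, and the NetPIPE-style latency/bandwidth sweep is omitted).

    import time
    import numpy as np

    n = 2048
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    t0 = time.perf_counter()
    c = a @ b                                    # SGEMM via the BLAS NumPy links
    dt = time.perf_counter() - t0
    print(f"{2 * n**3 / dt / 1e9:.1f} GFLOP/s")  # 2n^3 flops per n x n GEMM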

  17. Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster

    International Nuclear Information System (INIS)

    Allada, Veerendra; Benjegerdes, Troy; Bode, Brett

    2009-01-01

    Commodity clusters augmented with application accelerators are evolving into competitive high-performance computing systems. The Graphical Processing Unit (GPU), with a very high arithmetic density and performance-per-price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementation of the Basic Linear Algebra Subroutines (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, they study the performance of the memory copies and GEMM subroutines that are critical to porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single- and double-precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results have been compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.

  18. CUBESIM, Hypercube and Denelcor Hep Parallel Computer Simulation

    International Nuclear Information System (INIS)

    Dunigan, T.H.

    1988-01-01

    1 - Description of program or function: CUBESIM is a set of subroutine libraries and programs for the simulation of message-passing parallel computers and shared-memory parallel computers. Subroutines are supplied to simulate the Intel hypercube and the Denelcor HEP parallel computers. The system permits a user to develop and test parallel programs written in C or FORTRAN on a single processor. The user may alter such hypercube parameters as message startup times, packet size, and the computation-to-communication ratio. The simulation generates a trace file that can be used for debugging, performance analysis, or graphical display. 2 - Method of solution: The CUBESIM simulator is linked with the user's parallel application routines to run as a single UNIX process. The simulator library provides a small operating system to perform process and message management. 3 - Restrictions on the complexity of the problem: Up to 128 processors can be simulated with a virtual memory limit of 6 million bytes. Up to 1000 processes can be simulated.

  19. A silicon-nanowire memory driven by optical gradient force induced bistability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, B. [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (Singapore); Institute of Microelectronics, A*STAR (Agency for Science, Technology and Research), Singapore 117685 (Singapore); Cai, H., E-mail: caih@ime.a-star.edu.sg; Gu, Y. D.; Kwong, D. L. [Institute of Microelectronics, A*STAR (Agency for Science, Technology and Research), Singapore 117685 (Singapore); Chin, L. K.; Ng, G. I.; Ser, W. [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (Singapore); Huang, J. G. [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (Singapore); Institute of Microelectronics, A*STAR (Agency for Science, Technology and Research), Singapore 117685 (Singapore); School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an 710049 (China); Yang, Z. C. [School of Electronics Engineering and Computer Science, Peking University, Beijing 100871 (China); Liu, A. Q., E-mail: eaqliu@ntu.edu.sg [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (Singapore); School of Electronics Engineering and Computer Science, Peking University, Beijing 100871 (China)

    2015-12-28

    In this paper, a bistable optically driven silicon-nanowire memory is demonstrated, which employs a ring resonator to generate an optical gradient force over a doubly clamped silicon nanowire. The two stable deformation positions of the doubly clamped silicon nanowire represent two memory states (“0” and “1”) and can be set/reset by modulating the light intensity (<3 mW) based on the optical-force-induced bistability. The time response of the optically driven memory is less than 250 ns. It has applications in the fields of all-optical communication, quantum computing, and optomechanical circuits.

  20. Scalable quantum memory in the ultrastrong coupling regime.

    Science.gov (United States)

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device, and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performances.

  1. A comparison of three types of autobiographical memories in old-old age: first memories, pivotal memories and traumatic memories.

    Science.gov (United States)

    Cohen-Mansfield, Jiska; Shmotkin, Dov; Eyal, Nitza; Reichental, Yael; Hazan, Haim

    2010-01-01

    Autobiographical memory enables us to construct a personal narrative through which we identify ourselves. Especially important are memories of formative events. This study describes autobiographical memories of people who have reached old-old age (85 years and above), studying 3 types of memories of particular impact on identity and adaptation: first memories, pivotal memories and traumatic memories. In this paper, we examine the content, characteristic themes and environments, and structural characteristics of each of the 3 types of memory. The participants were 26 persons from a larger longitudinal study with an average age of 91 years; half were men and the other half women. The study integrated qualitative and quantitative tools. An open-ended questionnaire included questions about the participants' life story as well as questions about the 3 types of memories. The responses were rated by 3 independent judges on dimensions of central themes and structural characteristics. First memories had a more positive emotional tone, more references to characters from the participant's social circle, a stronger sense of group belonging, and a more narrative style than the other types of memories. Pivotal and traumatic memories were described as more personal than first memories. The 3 types of memories reflect different stages in life development, which together form a sense of identity. They present experiences from the past on select themes, which may assist in the complex task of coping with the difficulties and limitations that advanced old age presents. Future research should examine the functional role of those memories and whether they enable the old-old to support selfhood in the challenging period of last changes and losses. Copyright © 2010 S. Karger AG, Basel.

  2. A Computer in Your Lap.

    Science.gov (United States)

    Byers, Joseph W.

    1991-01-01

    The most useful feature of laptop computers is portability, as one elementary school principal notes. IBM and Apple are not leaders in laptop technology. Tandy and Toshiba market relatively inexpensive models offering durability, reliable software, and sufficient memory space. (MLH)

  3. Enhancing memory performance after organic brain disease relies on retrieval processes rather than encoding or consolidation

    NARCIS (Netherlands)

    Hildebrandt, H.; Gehrmann, A.; Mödden, C.; Eling, P.A.T.M.

    2011-01-01

    Neuropsychological rehabilitation of memory performance is still a controversial topic, and rehabilitation studies have not analyzed to which stage of memory processing (encoding, consolidation, or retrieval) enhancement may be attributed. We first examined the efficacy of a computer training

  4. Improving Outcome of Psychosocial Treatments by Enhancing Memory and Learning

    Science.gov (United States)

    Harvey, Allison G.; Lee, Jason; Williams, Joseph; Hollon, Steven D.; Walker, Matthew P.; Thompson, Monique A.; Smith, Rita

    2014-01-01

    Mental disorders are prevalent and lead to significant impairment. Progress toward establishing treatments has been good. However, effect sizes are small to moderate, gains may not persist, and many patients derive no benefit. Our goal is to highlight the potential for empirically-supported psychosocial treatments to be improved by incorporating insights from cognitive psychology and research on education. Our central question is: If it were possible to improve memory for content of sessions of psychosocial treatments, would outcome substantially improve? This question arises from five lines of evidence: (a) mental illness is often characterized by memory impairment, (b) memory impairment is modifiable, (c) psychosocial treatments often involve the activation of emotion, (d) emotion can bias memory and (e) memory for psychosocial treatment sessions is poor. Insights from scientific knowledge on learning and memory are leveraged to derive strategies for a transdiagnostic and transtreatment cognitive support intervention. These strategies can be applied within and between sessions and to interventions delivered via computer, the internet and text message. Additional novel pathways to improving memory include improving sleep, engaging in exercise and imagery. Given that memory processes change across the lifespan, services to children and older adults may benefit from cognitive support. PMID:25544856

  5. Long-term associative learning predicts verbal short-term memory performance.

    Science.gov (United States)

    Jones, Gary; Macken, Bill

    2018-02-01

    Studies using tests such as digit span and nonword repetition have implicated short-term memory across a range of developmental domains. Such tests ostensibly assess specialized processes for the short-term manipulation and maintenance of information that are often argued to enable long-term learning. However, there is considerable evidence for an influence of long-term linguistic learning on performance in short-term memory tasks that brings into question the role of a specialized short-term memory system separate from long-term knowledge. Using natural language corpora, we show experimentally and computationally that performance on three widely used measures of short-term memory (digit span, nonword repetition, and sentence recall) can be predicted from simple associative learning operating on the linguistic environment to which a typical child may have been exposed. The findings support the broad view that short-term verbal memory performance reflects the application of long-term language knowledge to the experimental setting.
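
    The paper's central claim, that long-term associative learning over a corpus predicts short-term recall, can be caricatured in a few lines (a toy sketch, not the authors' computational model): learn pairwise association strengths from a corpus, then score a to-be-recalled sequence by the summed strength of its adjacent pairs.

    from collections import Counter

    def bigram_strengths(tokens):
        """Relative frequencies of adjacent pairs in a corpus."""
        pairs = Counter(zip(tokens, tokens[1:]))
        total = sum(pairs.values())
        return {p: c / total for p, c in pairs.items()}

    def recall_score(sequence, strengths):
        # Higher summed association -> better predicted short-term recall.
        return sum(strengths.get(p, 0.0) for p in zip(sequence, sequence[1:]))

    corpus = "the cat sat on the mat the cat ran".split()
    s = bigram_strengths(corpus)
    print(recall_score(["the", "cat", "sat"], s) >
          recall_score(["sat", "the", "cat"], s))   # True: familiar order wins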

  6. A chiral-based magnetic memory device without a permanent magnet.

    Science.gov (United States)

    Ben Dor, Oren; Yochelis, Shira; Mathew, Shinto P; Naaman, Ron; Paltiel, Yossi

    2013-01-01

    Several technologies are currently in use for computer memory devices. However, there is a need for a universal memory device that has high density, high speed and low power requirements. To this end, various types of magnetic-based technologies with a permanent magnet have been proposed. Recent charge-transfer studies indicate that chiral molecules act as an efficient spin filter. Here we utilize this effect to achieve a proof of concept for a new type of chiral-based, Si-compatible universal magnetic memory device without a permanent magnet. More specifically, we use spin-selective charge transfer through a self-assembled monolayer of polyalanine to magnetize a Ni layer. This magnitude of magnetization corresponds to applying an external magnetic field of 0.4 T to the Ni layer. The readout is achieved using low currents. The presented technology has the potential to overcome the limitations of other magnetic-based memory technologies, allowing the fabrication of inexpensive, high-density universal memory-on-chip devices.

  7. Performance, motivation and immersion within a suite of working memory games

    OpenAIRE

    Karlsen, Hanne Fagerjord

    2014-01-01

    Almost 20% of Norwegian children and youth struggle with behavioural and cognitive disability. Working memory deficiency is especially common among children with ADHD. Recent advances in developmental psychology suggest that people with ADHD might benefit from games designed to train working memory abilities. The motivating factor from computer games can be especially strong to those with ADHD, as they respond strongly to motivational reinforcement. This thesis investigates performance, ...

  8. Noise tolerant dendritic lattice associative memories

    Science.gov (United States)

    Ritter, Gerhard X.; Schmalz, Mark S.; Hayden, Eric; Tucker, Marc

    2011-09-01

    Linear classifiers based on computation over the real numbers R (e.g., with operations of addition and multiplication), denoted by (R, +, ×), have been represented extensively in the literature of pattern recognition. However, a different approach to pattern classification involves the use of addition, maximum, and minimum operations over the reals, in the algebra (R, +, maximum, minimum). These pattern classifiers, based on lattice algebra, have been shown to exhibit superior information storage capacity, fast training and short convergence times, high pattern classification accuracy, and low computational cost. Such attributes are not always found, for example, in classical neural nets based on the linear inner product. In a special type of lattice associative memory (LAM), called a dendritic LAM or DLAM, it is possible to achieve noise-tolerant pattern classification by varying the design of noise or error acceptance bounds. This paper presents theory and algorithmic approaches for the computation of noise-tolerant lattice associative memories (LAMs) under a variety of input constraints. Of particular interest is the classification of nonergodic data in noise regimes with time-varying statistics. DLAMs, which are a specialization of LAMs derived from concepts of biological neural networks, have successfully been applied to pattern classification from hyperspectral remote sensing data, as well as spatial object recognition from digital imagery. The authors' recent research in the development of DLAMs is overviewed, with experimental results that show utility for a wide variety of pattern classification applications. Performance results are presented in terms of measured computational cost, noise tolerance, classification accuracy, and throughput for a variety of input data and noise levels.
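
    The lattice-algebra machinery is compact enough to sketch. Below is a small Ritter-style (max, +) heteroassociative memory consistent with the operations described above, though not the noise-tolerant DLAM construction of the paper itself: training takes a pointwise minimum of differences, and recall is a max-plus matrix-vector product.

    import numpy as np

    def train(X, Y):
        # W[i, j] = min over patterns k of (Y[k, i] - X[k, j])
        return np.min(Y[:, :, None] - X[:, None, :], axis=0)

    def recall(W, x):
        # Max-plus product: y[i] = max_j (W[i, j] + x[j])
        return np.max(W + x[None, :], axis=1)

    X = np.array([[1.0, 2.0], [0.0, 5.0]])   # input patterns (one per row)
    Y = np.array([[3.0, 1.0], [4.0, 0.0]])   # associated output patterns
    W = train(X, Y)
    print(recall(W, X[0]), recall(W, X[1]))  # recovers Y[0] and Y[1]

    No multiplications appear anywhere: storage and recall use only addition, maximum, and minimum, which is the source of the low computational cost noted above.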

  9. Assistive technology for memory support in dementia.

    Science.gov (United States)

    Van der Roest, Henriëtte G; Wenborn, Jennifer; Pastink, Channah; Dröes, Rose-Marie; Orrell, Martin

    2017-06-11

    maintained by the Information Specialists of the CDCIG and contains studies in the areas of dementia prevention, dementia treatment and cognitive enhancement in healthy people. We also searched the following list of databases, adapting the search strategy as necessary: Centre for Reviews and Dissemination (CRD) Databases, up to May 2016; The Collection of Computer Science Bibliographies; DBLP Computer Science Bibliography; HCI Bibliography: Human-Computer Interaction Resources; and AgeInfo, all to June 2016; PiCarta; Inspec; Springer Link Lecture Notes; Social Care Online; and IEEE Computer Society Digital Library, all to October 2016; J-STAGE: Japan Science and Technology Information Aggregator, Electronic; and Networked Computer Science Technical Reference Library (NCSTRL), both to November 2016; Computing Research Repository (CoRR) up to December 2016; and OT seeker; and ADEAR, both to February 2017. In addition, we searched Google Scholar and OpenSIGLE for grey literature. We intended to review randomised controlled trials (RCTs) and clustered randomised trials with blinded assessment of outcomes that evaluated an electronic assistive device used with the single aim of supporting memory function in people diagnosed with dementia. The control interventions could either be 'care (or treatment) as usual' or non-technological psychosocial interventions (including interventions that use non-electronic assistive devices) also specifically aimed at supporting memory. Outcome measures included activities of daily living, level of dependency, clinical and care-related outcomes (for example admission to long-term care), perceived quality of life and well-being, and adverse events resulting from the use of AT; as well as the effects of AT on carers. Two review authors independently screened all titles and abstracts identified by the search. We identified no studies which met the inclusion criteria. This review highlights the current lack of high-quality evidence to determine

  10. Make Computer Learning Stick.

    Science.gov (United States)

    Casella, Vicki

    1985-01-01

    Teachers are using computer programs in conjunction with many classroom staples such as art supplies, math manipulatives, and science reference books. Twelve software programs and related activities are described which teach visual and auditory memory and spatial relations, as well as subject areas such as anatomy and geography. (MT)

  11. Aging memories: differential decay of episodic memory components.

    Science.gov (United States)

    Talamini, Lucia M; Gorree, Eva

    2012-05-17

    Some memories about events can persist for decades, even a lifetime. However, recent memories incorporate rich sensory information, including knowledge on the spatial and temporal ordering of event features, while old memories typically lack this "filmic" quality. We suggest that this apparent change in the nature of memories may reflect a preferential loss of hippocampus-dependent, configurational information over more cortically based memory components, including memory for individual objects. The current study systematically tests this hypothesis, using a new paradigm that allows the contemporaneous assessment of memory for objects, object pairings, and object-position conjunctions. Retention of each memory component was tested, at multiple intervals, up to 3 mo following encoding. The three memory subtasks adopted the same retrieval paradigm and were matched for initial difficulty. Results show differential decay of the tested episodic memory components, whereby memory for configurational aspects of a scene (objects' co-occurrence and object position) decays faster than memory for featured objects. Interestingly, memory requiring a visually detailed object representation decays at a similar rate as global object recognition, arguing against interpretations based on task difficulty and against the notion that (visual) detail is forgotten preferentially. These findings show that memories undergo qualitative changes as they age. More specifically, event memories become less configurational over time, preferentially losing some of the higher order associations that are dependent on the hippocampus for initial fast encoding. Implications for theories of long-term memory are discussed.

  12. Searching for New Double Stars with a Computer

    Science.gov (United States)

    Bryant, T. V.

    2015-04-01

    The advent of computers with large amounts of RAM and fast processors, as well as easy internet access to large online astronomical databases, has made computer searches based on astrometric data practicable for most researchers. This paper describes one such search that has uncovered hitherto unrecognized double stars.
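
    A toy version of such a search (the field names and separation threshold are assumptions; a real search would work on proper catalogue astrometry): scan a coordinate list for pairs of stars closer than a chosen angular separation.

    import numpy as np

    def candidate_pairs(ra_deg, dec_deg, max_sep_arcsec=10.0):
        """Brute-force O(n^2) scan; fine for a sketch, not a full catalogue."""
        sep = max_sep_arcsec / 3600.0
        pairs = []
        for i in range(len(ra_deg)):
            # small-angle separation, RA scaled by cos(dec)
            dra = (ra_deg - ra_deg[i]) * np.cos(np.radians(dec_deg[i]))
            ddec = dec_deg - dec_deg[i]
            close = np.hypot(dra, ddec) < sep
            pairs += [(i, j) for j in np.nonzero(close)[0] if j > i]
        return pairs

    ra = np.array([10.0000, 10.0010, 40.0])
    dec = np.array([20.0000, 20.0005, -5.0])
    print(candidate_pairs(ra, dec))   # [(0, 1)]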

  13. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain wall magnets (DWMs) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSVs) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin current facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.

  14. Improved look-up table method of computer-generated holograms.

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that, while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
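
    A rough sketch of the look-up-table idea (the Fresnel zone-plate form, sizes and names are assumptions, not the authors' GPU kernel): a per-depth phase pattern is computed once off-line, then translated to each object point's lateral position instead of being recomputed per point.

    import numpy as np

    wavelength, pitch, N = 633e-9, 8e-6, 256
    ys, xs = np.mgrid[0:N, 0:N] * pitch
    xc, yc = xs.mean(), ys.mean()

    def distance_factor(z):
        """Centered Fresnel phase pattern for depth plane z (precomputed)."""
        r2 = (xs - xc) ** 2 + (ys - yc) ** 2
        return np.exp(1j * np.pi * r2 / (wavelength * z))

    table = {z: distance_factor(z) for z in (0.1, 0.2)}   # off-line LUT

    def add_point(holo, ix, iy, z, amp=1.0):
        # Shift the precomputed pattern to the point's pixel position.
        # (np.roll wraps at the edges; a real kernel would crop instead.)
        holo += amp * np.roll(np.roll(table[z], iy - N // 2, axis=0),
                              ix - N // 2, axis=1)

    holo = np.zeros((N, N), dtype=complex)
    add_point(holo, 100, 80, 0.1)
    add_point(holo, 40, 200, 0.2)
    fringe = holo.real                # interference pattern to encode on the SLM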

  15. Encoding, Consolidation, and Retrieval of Contextual Memory: Differential Involvement of Dorsal CA3 and CA1 Hippocampal Subregions

    Science.gov (United States)

    Daumas, Stephanie; Halley, Helene; Frances, Bernard; Lassalle, Jean-Michel

    2005-01-01

    Studies on humans and animals shed light on the unique contributions of the hippocampus to relational memory. However, the particular role of each hippocampal subregion in memory processing is still not clear. Hippocampal computational models and theories have emphasized a unique function in memory for each hippocampal subregion, with the CA3 area acting…

  16. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPUs) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-Purpose Graphics Processing Unit (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
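
    The overlap of transfers and computation described above can be sketched with PyCUDA streams and pinned host memory (the reconstruction kernel itself is elided, and all sizes and names here are illustrative, not from the project's code):

    import numpy as np
    import pycuda.autoinit                      # creates a CUDA context
    import pycuda.driver as cuda

    n = 512 * 512
    projections = [np.random.rand(n).astype(np.float32) for _ in range(4)]

    streams = [cuda.Stream() for _ in range(2)]                      # double buffering
    host = [cuda.pagelocked_empty(n, np.float32) for _ in range(2)]  # pinned
    dev = [cuda.mem_alloc(host[0].nbytes) for _ in range(2)]

    for i, proj in enumerate(projections):
        k = i % 2
        streams[k].synchronize()                # wait until buffer k is free again
        host[k][:] = proj                       # stage into pinned memory
        cuda.memcpy_htod_async(dev[k], host[k], streams[k])  # async H->D copy
        # A back-projection kernel would launch on streams[k] here, so the
        # next iteration's copy on the other stream overlaps with it.

    for s in streams:
        s.synchronize()

    Pinned (page-locked) buffers are what make the asynchronous copy effective; with ordinary pageable memory the driver falls back to a synchronous staging copy and the overlap disappears.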

  17. Episodic grammar: a computational model of the interaction between episodic and semantic memory in language processing

    NARCIS (Netherlands)

    Borensztajn, G.; Zuidema, W.; Carlson, L.; Hoelscher, C.; Shipley, T.F.

    2011-01-01

    We present a model of the interaction of semantic and episodic memory in language processing. Our work shows how language processing can be understood in terms of memory retrieval. We point out that the perceived dichotomy between rule-based versus exemplar-based language modelling can be

  18. Axially modulated arch resonator for logic and memory applications

    KAUST Repository

    Hafiz, Md Abdullah Al

    2018-01-17

    We demonstrate reconfigurable logic and random access memory devices based on an axially modulated clamped-guided arch resonator. The device is electrostatically actuated and the motional signal is capacitively sensed, while the resonance frequency is modulated through an axial electrostatic force from the guided side of the microbeam. A multi-physics finite element model is used to verify the effectiveness of the axial modulation. We present two case studies: first, a reconfigurable two-input logic gate based on the linear resonance frequency modulation, and second, a memory element based on the hysteretic frequency response of the resonator working in the nonlinear regime. The energy consumption of the device for both logic and memory operations is in the range of picojoules, promising for an energy-efficient alternative computing paradigm.

  19. Lattice gauge theories on a hypercube computer

    International Nuclear Information System (INIS)

    Otto, S.W.

    1984-01-01

    A report is given on the parallel computer effort underway at Caltech and the use of these machines for lattice gauge theories. The computational requirements of the Monte Carlos are, of course, enormous, so high Mflops (millions of floating point operations per second) and large memories are required. Various calculations on the machines, in regard to their programmability (a non-trivial issue on a parallel computer) and their efficiency of machine usage, are discussed.

  20. Modeling Coevolution between Language and Memory Capacity during Language Origin

    Science.gov (United States)

    Gong, Tao; Shuai, Lan

    2015-01-01

    Memory is essential to many cognitive tasks including language. Apart from empirical studies of memory effects on language acquisition and use, there have been few evolutionary explorations of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocate a coevolution scenario between language and memory capacity, which incorporates the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selection on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution; and the coevolution stopped once memory capacities became sufficient for language communication. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborates the biology-culture parallelism of language evolution, demonstrates the driving force of culturally constituted factors for natural selection of individual cognitive abilities, and suggests that the degree difference in language-related cognitive abilities between humans and nonhuman animals could result from coevolution with language. PMID:26544876

  1. Alan Turing's Automatic Computing Engine The Master Codebreaker's Struggle to build the Modern Computer

    CERN Document Server

    Copeland, B Jack

    2005-01-01

    The mathematical genius Alan Turing (1912-1954) was one of the greatest scientists and thinkers of the 20th century. Now well known for his crucial wartime role in breaking the ENIGMA code, he was the first to conceive of the fundamental principle of the modern computer - the idea of controlling a computing machine's operations by means of a program of coded instructions, stored in the machine's 'memory'. In 1945 Turing drew up his revolutionary design for an electronic computing machine - his Automatic Computing Engine ('ACE'). A pilot model of the ACE ran its first program in 1950 and the product

  2. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.

  3. Working memory training and semantic structuring improves remembering future events, not past events.

    Science.gov (United States)

    Richter, Kim Merle; Mödden, Claudia; Eling, Paul; Hildebrandt, Helmut

    2015-01-01

    Objectives. Memory training in combination with practice in semantic structuring and word fluency has been shown to improve memory performance. This study investigated the efficacy of working memory training combined with exercises in semantic structuring and word fluency and examined whether training effects generalize to other cognitive tasks. Methods. In this double-blind randomized controlled study, 36 patients with memory impairments following brain damage were allocated to either the experimental or the active control condition, with both groups receiving 9 hours of therapy. The experimental group received computer-based working memory training and exercises in word fluency and semantic structuring. The control group received the standard memory therapy provided in the rehabilitation center. Patients were tested on a neuropsychological test battery before and after therapy, resulting in composite scores for working memory; immediate, delayed, and prospective memory; word fluency; and attention. Results. The experimental group improved significantly in working memory and word fluency. The training effects also generalized to prospective memory tasks. No specific effect on episodic memory could be demonstrated. Conclusion. Combined treatment of working memory training with exercises in semantic structuring is an effective method for cognitive rehabilitation of organic memory impairment. © The Author(s) 2014.

  4. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
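    A serial reference version of the sorted k-mer list data structure mentioned above is sketched below; the paper distributes this across BG/P nodes, whereas this sketch only illustrates how sorted lists let two genomes be compared by a single merge pass. Function names are illustrative.

```python
def sorted_kmer_list(sequence, k):
    """Return (kmer, position) pairs sorted lexicographically by k-mer."""
    kmers = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
    kmers.sort()
    return kmers

def shared_kmers(seq_a, seq_b, k):
    """Find matching k-mers in two genomes by merging their sorted lists.

    Repeated k-mers are handled simplistically (first occurrence only);
    a real seeding stage would enumerate all pairings.
    """
    a, b = sorted_kmer_list(seq_a, k), sorted_kmer_list(seq_b, k)
    i = j = 0
    matches = []
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:
            matches.append((a[i][1], b[j][1]))  # seed: positions in each genome
            i += 1
        elif a[i][0] < b[j][0]:
            i += 1
        else:
            j += 1
    return matches

print(shared_kmers("ACGTACGT", "TTACGTAA", k=4))
```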

  5. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
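    For reference, the all-pairs distance computation used as the article's running example can be written in vectorized NumPy form as below; the article's GPU version instead assigns one CUDA thread per (i, j) pair with coalesced reads of the data matrix. This CPU sketch is an assumption about the baseline, not the article's code.

```python
import numpy as np

def all_pairs_distance(X):
    """Euclidean distance between every pair of rows of X (n x d)."""
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, computed for all pairs at once.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding

X = np.random.default_rng(0).standard_normal((1000, 32))
D = all_pairs_distance(X)   # D[i, j] = distance between instances i and j
```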

  6. Multilevel radiative thermal memory realized by the hysteretic metal-insulator transition of vanadium dioxide

    International Nuclear Information System (INIS)

    Ito, Kota; Nishikawa, Kazutaka; Iizuka, Hideo

    2016-01-01

    Thermal information processing is attracting much interest as an analog of electronic computing. We experimentally demonstrated a radiative thermal memory utilizing a phase change material. The hysteretic metal-insulator transition of vanadium dioxide (VO₂) allows us to obtain a multilevel memory. We developed a Preisach model to explain the hysteretic radiative heat transfer between a VO₂ film and a fused quartz substrate. The transient response of our memory predicted by the Preisach model agrees well with the measured response. Our multilevel thermal memory paves the way for thermal information processing as well as contactless thermal management.

  7. Multilevel radiative thermal memory realized by the hysteretic metal-insulator transition of vanadium dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Kota, E-mail: kotaito@mosk.tytlabs.co.jp; Nishikawa, Kazutaka; Iizuka, Hideo [Toyota Central Research and Development Labs, Nagakute, Aichi 480-1192 (Japan)

    2016-02-01

    Thermal information processing is attracting much interest as an analog of electronic computing. We experimentally demonstrated a radiative thermal memory utilizing a phase change material. The hysteretic metal-insulator transition of vanadium dioxide (VO₂) allows us to obtain a multilevel memory. We developed a Preisach model to explain the hysteretic radiative heat transfer between a VO₂ film and a fused quartz substrate. The transient response of our memory predicted by the Preisach model agrees well with the measured response. Our multilevel thermal memory paves the way for thermal information processing as well as contactless thermal management.
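    The Preisach picture invoked by both records can be sketched as a weighted sum of elementary relay hysterons, each switching up and down at its own threshold temperature, which is what yields stable intermediate (multilevel) states. The thresholds and weights below are illustrative, not the fitted VO₂ values from the papers.

```python
import numpy as np

class PreisachMemory:
    def __init__(self, t_up, t_down, weights):
        self.t_up, self.t_down = np.asarray(t_up), np.asarray(t_down)
        self.weights = np.asarray(weights)
        self.state = np.zeros_like(self.weights)  # 0 = insulating, 1 = metallic

    def sweep_to(self, T):
        """Monotone sweep to temperature T; each hysteron switches at its
        own threshold, so history is retained between sweeps."""
        self.state = np.where(T >= self.t_up, 1.0, self.state)
        self.state = np.where(T <= self.t_down, 0.0, self.state)
        return float(self.weights @ self.state)  # net metallic fraction

# Cycling between the hysteresis branches leaves the film in intermediate
# mixed states: the memory levels.
mem = PreisachMemory(t_up=[66, 70, 74], t_down=[60, 64, 68], weights=[1/3] * 3)
for T in (25, 72, 62, 67):
    print(T, mem.sweep_to(T))   # 0.0, then 2/3, then 1/3, held at 1/3
```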

  8. Efficient external memory structures for range-aggregate queries

    DEFF Research Database (Denmark)

    Agarwal, P.K.; Yang, J.; Arge, L.

    2013-01-01

    We present external memory data structures for efficiently answering range-aggregate queries. The range-aggregate problem is defined as follows: Given a set of weighted points in R^d, compute the aggregate of the weights of the points that lie inside a d-dimensional orthogonal query rectangle. The...
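    To make the problem statement concrete, the sketch below shows the standard internal-memory baseline for d = 2: a prefix-sum grid that answers any axis-aligned rectangle query in O(1) by inclusion-exclusion. The paper's contribution is external-memory structures for the same queries; this sketch only illustrates what is being computed.

```python
import numpy as np

def build_prefix(weights):
    """weights: 2-D array of point weights binned onto a grid."""
    return np.cumsum(np.cumsum(weights, axis=0), axis=1)

def range_aggregate(P, r0, c0, r1, c1):
    """Sum of weights in rows r0..r1, cols c0..c1 (inclusive)."""
    total = P[r1, c1]
    if r0 > 0: total -= P[r0 - 1, c1]
    if c0 > 0: total -= P[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += P[r0 - 1, c0 - 1]  # re-add double-subtracted corner
    return total

W = np.arange(16).reshape(4, 4)
P = build_prefix(W)
assert range_aggregate(P, 1, 1, 2, 2) == W[1:3, 1:3].sum()
```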

  9. The use of computers for instruction in fluid dynamics

    Science.gov (United States)

    Watson, Val

    1987-01-01

    Applications for computers which improve instruction in fluid dynamics are examined. Computers can be used to illustrate three-dimensional flow fields and simple fluid dynamics mechanisms, to solve fluid dynamics problems, and for electronic sketching. The usefulness of computer applications is limited by computer speed, memory, and software, as well as by the clarity and field of view of the projected display. Proposed advances in personal computers which will address these limitations are discussed. Long range applications for computers in education are considered.

  10. A computational model for evaluating the effects of attention, memory, and mental models on situation assessment of nuclear power plant operators

    International Nuclear Information System (INIS)

    Lee, Hyun-Chul; Seong, Poong-Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, as failures of situation assessment may cause wrong decisions for process control and, ultimately, errors of commission in nuclear power plants. A few computational models that can be used to predict and quantify the situation awareness of operators have been suggested. However, these models do not sufficiently consider human characteristics of nuclear power plant operators. In this paper, we propose a computational model for situation assessment of nuclear power plant operators using a Bayesian network. This model incorporates human factors significantly affecting operators' situation assessment, such as attention, working memory decay, and mental model. As this proposed model provides quantitative results of situation assessment and diagnostic performance, we expect that this model can be used in the design and evaluation of human system interfaces as well as the prediction of situation awareness errors in the human reliability analysis.

  11. A computational model for evaluating the effects of attention, memory, and mental models on situation assessment of nuclear power plant operators

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyun-Chul [Instrumentation and Control/Human Factors Division, Korea Atomic Energy Research Institute, 1045 Daedeok-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of)], E-mail: leehc@kaeri.re.kr; Seong, Poong-Hyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2009-11-15

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, as failures of situation assessment may cause wrong decisions for process control and, ultimately, errors of commission in nuclear power plants. A few computational models that can be used to predict and quantify the situation awareness of operators have been suggested. However, these models do not sufficiently consider human characteristics of nuclear power plant operators. In this paper, we propose a computational model for situation assessment of nuclear power plant operators using a Bayesian network. This model incorporates human factors significantly affecting operators' situation assessment, such as attention, working memory decay, and mental model. As this proposed model provides quantitative results of situation assessment and diagnostic performance, we expect that this model can be used in the design and evaluation of human system interfaces as well as the prediction of situation awareness errors in the human reliability analysis.
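    A toy Bayesian update in the spirit of this model (records 10 and 11) is sketched below: the operator holds beliefs over plant states and updates them from indicator readings, with an attention parameter degrading the evidence actually used. The states, indicator, probabilities, and the specific attention blending are all hypothetical stand-ins, not the paper's network.

```python
def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Hypothetical P(indicator reading | plant state) for one binary alarm.
likelihood = {"normal": {"alarm": 0.05, "quiet": 0.95},
              "loca":   {"alarm": 0.90, "quiet": 0.10}}

def update_belief(prior, reading, attention=1.0):
    """Bayes update; attention < 1 blends the true likelihood toward an
    uninformative 0.5, a crude stand-in for missed or shallowly processed
    HSI information."""
    post = {}
    for state, p in prior.items():
        like = likelihood[state][reading]
        post[state] = p * (attention * like + (1 - attention) * 0.5)
    return normalize(post)

belief = {"normal": 0.99, "loca": 0.01}
for reading in ["alarm", "alarm", "quiet"]:
    belief = update_belief(belief, reading, attention=0.8)
    print(reading, belief)   # belief shifts toward "loca" as alarms accumulate
```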

  12. Processor-in-memory-and-storage architecture

    Science.gov (United States)

    DeBenedictis, Erik

    2018-01-02

    A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes a same instruction at a same time. A code analyzer receives a plurality of residues that represents a code word corresponding to the same instruction and an indication of whether the code word is a memory address code or a data code from the plurality of sub-cores. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and a determination of whether the plurality of residues are consistent or inconsistent.
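    The residue-consistency idea in this record can be illustrated with a redundant residue number system: each sub-core would carry one residue of the code word, and the code analyzer reconstructs the word and checks that it is a legal one. The moduli, legal range, and single-word framing below are illustrative choices, not the patented design.

```python
from math import prod

MODULI = (3, 5, 7)     # pairwise coprime: residues determine a word in 0..104
LEGAL_RANGE = 32       # legal words use only part of that range; the spare
                       # range is redundancy that exposes faulty residues

def encode(word):
    return tuple(word % m for m in MODULI)

def decode(residues):
    """Chinese-remainder reconstruction of the word from its residues."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

def consistent(residues):
    # A corrupted residue usually maps the word outside the legal range.
    return decode(residues) < LEGAL_RANGE

word = 29
assert consistent(encode(word))
r = list(encode(word)); r[0] = (r[0] + 1) % MODULI[0]   # inject a fault
print(consistent(tuple(r)))   # False here: the fault is detected
```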

  13. Note on soft theorems and memories in even dimensions

    Science.gov (United States)

    Mao, Pujian; Ouyang, Hao

    2017-11-01

    Recently, it has been shown that Weinberg's formula for soft graviton production is essentially a Fourier transform of the formula for gravitational memory, which provides an effective way to understand how the classical calculation arises as a limiting case of the quantum result. In this note, we propose a general framework that connects the soft theorems to the radiation fields obtained from classical computation for different theories in even dimensions. We show that the latter is nothing but the Fourier transform of the former. The memory formulas can be derived from the radiation fields explicitly.

  14. Computer architecture evaluation for structural dynamics computations: Project summary

    Science.gov (United States)

    Standley, Hilda M.

    1989-01-01

    The proposed effort examines how the elements of parallel architectures affect the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  15. Limbic systems for emotion and for memory, but no single limbic system.

    Science.gov (United States)

    Rolls, Edmund T

    2015-01-01

    The concept of a (single) limbic system is shown to be outmoded. Instead, anatomical, neurophysiological, functional neuroimaging, and neuropsychological evidence is described that anterior limbic and related structures including the orbitofrontal cortex and amygdala are involved in emotion, reward valuation, and reward-related decision-making (but not memory), with the value representations transmitted to the anterior cingulate cortex for action-outcome learning. In this 'emotion limbic system' a computational principle is that feedforward pattern association networks learn associations from visual, olfactory and auditory stimuli to primary reinforcers such as taste, touch, and pain. In primates including humans this learning can be very rapid and rule-based, with the orbitofrontal cortex overshadowing the amygdala in this learning important for social and emotional behaviour. Complementary evidence is described showing that the hippocampus and limbic structures to which it is connected including the posterior cingulate cortex and the fornix-mammillary body-anterior thalamus-posterior cingulate circuit are involved in episodic or event memory, but not emotion. This 'hippocampal system' receives information from neocortical areas about spatial location, and objects, and can rapidly associate this information together by the different computational principle of autoassociation in the CA3 region of the hippocampus involving feedback. The system can later recall the whole of this information in the CA3 region from any component, a feedback process, and can recall the information back to neocortical areas, again a feedback (to neocortex) recall process. Emotion can enter this memory system from the orbitofrontal cortex etc., and be recalled back to the orbitofrontal cortex etc. during memory recall, but the emotional and hippocampal networks or 'limbic systems' operate by different computational principles, and operate independently of each other except insofar as an…
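    The autoassociation-with-feedback principle attributed to CA3 here can be illustrated with a Hopfield-style network: patterns stored in a recurrent weight matrix are recalled whole from any part. This is a generic sketch of the computational principle, not the author's specific hippocampal model.

```python
import numpy as np

def store(patterns):
    """Hebbian storage of +/-1 patterns in a symmetric weight matrix."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W, cue, steps=10):
    """Iterate the feedback dynamics until the state settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))   # three stored "events"
W = store(patterns)
cue = patterns[0].copy()
cue[50:] = rng.choice([-1.0, 1.0], size=50)         # corrupt half: a partial cue
print(np.mean(recall(W, cue) == patterns[0]))        # ~1.0: completed from a part
```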

  16. Coherent oscillatory networks supporting short-term memory retention.

    Science.gov (United States)

    Payne, Lisa; Kounios, John

    2009-01-09

    Accumulating evidence suggests that top-down processes, reflected by frontal-midline theta-band (4-8 Hz) electroencephalogram (EEG) oscillations, strengthen the activation of a memory set during short-term memory (STM) retention. In addition, the amplitude of posterior alpha-band (8-13 Hz) oscillations during STM retention is thought to reflect a mechanism that protects fragile STM activations from interference by gating bottom-up sensory inputs. The present study addressed two important questions about these phenomena. First, why have previous studies not consistently found memory set-size effects on frontal-midline theta? Second, how does posterior alpha participate in STM retention? To answer these questions, large-scale network connectivity during STM retention was examined by computing EEG wavelet coherence during the retention period of a modified Sternberg task using visually-presented letters as stimuli. The results showed (a) increasing theta-band coherence between frontal-midline and left temporal-parietal sites with increasing memory load, and (b) increasing alpha-band coherence between midline parietal and left temporal/parietal sites with increasing memory load. These findings support the view that theta-band coherence, rather than amplitude, is the key factor in selective top-down strengthening of the memory set and demonstrate that posterior alpha-band oscillations associated with sensory gating are involved in STM retention by participating in the STM network.
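    The study's measure is wavelet coherence; as a simpler stand-in, the sketch below computes Welch magnitude-squared coherence between two simulated EEG channels and averages it within the theta (4-8 Hz) and alpha (8-13 Hz) bands. The sampling rate, window length, and synthetic signals are assumptions for illustration.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band):
    f, Cxy = coherence(x, y, fs=fs, nperseg=fs * 2)   # 2 s Welch windows
    mask = (f >= band[0]) & (f <= band[1])
    return float(Cxy[mask].mean())

fs = 250                                   # Hz, a typical EEG sampling rate
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 6 * t)         # common 6 Hz (theta) drive
frontal = shared + 0.5 * np.random.randn(t.size)
parietal = shared + 0.5 * np.random.randn(t.size)
print("theta:", band_coherence(frontal, parietal, fs, (4, 8)))    # high
print("alpha:", band_coherence(frontal, parietal, fs, (8, 13)))   # near noise floor
```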

  17. Modeling spatial-temporal operations with context-dependent associative memories.

    Science.gov (United States)

    Mizraji, Eduardo; Lin, Juan

    2015-10-01

    We organize our behavior and store structured information with many procedures that require the coding of spatial and temporal order in specific neural modules. In the simplest cases, spatial and temporal relations are condensed in prepositions like "below" and "above", "behind" and "in front of", or "before" and "after", etc. Neural operators lie beneath these words, sharing some similarities with logical gates that compute spatial and temporal asymmetric relations. We show how these operators can be modeled by means of neural matrix memories acting on Kronecker tensor products of vectors. The complexity of these memories is further enhanced by their ability to store episodes unfolding in space and time. How does the brain scale up from the raw plasticity of contingent episodic memories to the apparent stable connectivity of large neural networks? We clarify this transition by analyzing a model that flexibly codes episodic spatial and temporal structures into contextual markers capable of linking different memory modules.
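    A minimal version of the context-dependent matrix memory described here is sketched below: each association is stored as the outer product of an output with the Kronecker product of an input and a contextual marker, so the same input retrieves different outputs under different contexts ("before" vs "after"). Dimensions and vector codes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
def unit(n):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

a = unit(16)                       # neural code for an event-pair cue
before = np.array([1.0, 0.0])      # orthonormal contextual markers
after  = np.array([0.0, 1.0])
out_ab, out_ba = unit(8), unit(8)  # codes for the two temporal orders

# Superpose the two associations in one memory matrix:
# M maps kron(input, context) onto the associated output.
M = (np.outer(out_ab, np.kron(a, before))
     + np.outer(out_ba, np.kron(a, after)))

# Recall: the context disambiguates what the same input retrieves.
print(np.allclose(M @ np.kron(a, before), out_ab))   # True
print(np.allclose(M @ np.kron(a, after),  out_ba))   # True
```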

  18. Dopaminergic inputs in the dentate gyrus direct the choice of memory encoding

    International Nuclear Information System (INIS)

    Du, Huiyun; Deng, Wei; Aimone, James B.; Ge, Minyan; Parylak, Sarah

    2016-01-01

    Rewarding experiences are often well remembered, and such memory formation is known to be dependent on dopamine modulation of the neural substrates engaged in learning and memory; however, it is unknown how and where in the brain dopamine signals bias episodic memory toward preceding rather than subsequent events. Here we found that photostimulation of channelrhodopsin-2–expressing dopaminergic fibers in the dentate gyrus induced a long-term depression of cortical inputs, diminished theta oscillations, and impaired subsequent contextual learning. Computational modeling based on this dopamine modulation indicated an asymmetric association of events occurring before and after reward in memory tasks. In subsequent behavioral experiments, preexposure to a natural reward suppressed hippocampus-dependent memory formation, with an effective time window consistent with the duration of dopamine-induced changes of dentate activity. Altogether, our results suggest a mechanism by which dopamine enables the hippocampus to encode memory with reduced interference from subsequent experience.

  19. Performing a local reduction operation on a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
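    A serial sketch of the copy/reduce choreography this record describes on one node follows: the two reduction cores interleave their input buffers into shared memory in chunks, then ownership of the output chunks alternates between the cores, each combining both halves of the interleaved buffer with the network cores' buffers. Plain loops stand in for the concurrent cores, and the chunk sizes are illustrative; this is one reading of the patent text, not its verified implementation.

```python
import numpy as np

CHUNK, N = 4, 8
red0, red1 = np.arange(N, dtype=float), np.arange(N, 2 * N, dtype=float)
net_w, net_r = np.ones(N), 2 * np.ones(N)       # network cores' inputs

# Step 1: copy the reduction cores' buffers into shared memory, interleaved
# in chunks (chunk 2k comes from red0, chunk 2k+1 from red1).
shared = np.empty(2 * N)
for i in range(0, N, CHUNK):
    shared[2 * i: 2 * i + CHUNK] = red0[i: i + CHUNK]
    shared[2 * i + CHUNK: 2 * i + 2 * CHUNK] = red1[i: i + CHUNK]

# Step 2: element-wise local reduction, with output-chunk ownership
# alternating between the two reduction cores (core 0 even, core 1 odd).
result = np.empty(N)
for core in (0, 1):
    for k in range(core, N // CHUNK, 2):
        sl = slice(k * CHUNK, (k + 1) * CHUNK)
        red0_part = shared[2 * k * CHUNK: 2 * k * CHUNK + CHUNK]
        red1_part = shared[(2 * k + 1) * CHUNK: (2 * k + 2) * CHUNK]
        result[sl] = red0_part + red1_part + net_w[sl] + net_r[sl]

assert np.array_equal(result, red0 + red1 + net_w + net_r)
```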

  20. MEMORY MODULATION

    Science.gov (United States)

    Roozendaal, Benno; McGaugh, James L.

    2011-01-01

    Our memories are not all created equally strong: Some experiences are well remembered while others are remembered poorly, if at all. Research on memory modulation investigates the neurobiological processes and systems that contribute to such differences in the strength of our memories. Extensive evidence from both animal and human research indicates that emotionally significant experiences activate hormonal and brain systems that regulate the consolidation of newly acquired memories. These effects are integrated through noradrenergic activation of the basolateral amygdala which regulates memory consolidation via interactions with many other brain regions involved in consolidating memories of recent experiences. Modulatory systems not only influence neurobiological processes underlying the consolidation of new information, but also affect other mnemonic processes, including memory extinction, memory recall and working memory. In contrast to their enhancing effects on consolidation, adrenal stress hormones impair memory retrieval and working memory. Such effects, as with memory consolidation, require noradrenergic activation of the basolateral amygdala and interactions with other brain regions. PMID:22122145