WorldWideScience

Sample records for computer memory

  1. Self-Testing Computer Memory

    Science.gov (United States)

    Chau, Savio, N.; Rennels, David A.

    1988-01-01

Memory system for computer repeatedly tests itself during brief, regular interruptions of normal processing of data. Detects and corrects transient faults such as single-event upsets (changes in bits due to ionizing radiation) within milliseconds of their occurring. Self-testing concept surpasses conventional approach by actively flushing latent defects out of memory and attempting to correct them before they accumulate beyond the capacity for self-correction or detection. Cost of improvement is a modest increase in complexity of circuitry and operating time.

  2. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

The main goal of this project is to investigate, develop, and implement algorithms for numerical linear algebra on parallel computers in order to acquire expertise in methods for parallel computations. An important motivation for analyzing and investigating the potential for parallelism in these algorithms is that many scientific applications rely heavily on the performance of the involved dense linear algebra building blocks. Even though we consider the distributed-memory as well as the shared-memory programming paradigm, the major part of the thesis is dedicated to distributed-memory architectures. We emphasize distributed-memory massively parallel computers - such as the Connection Machine models CM-200 and CM-5/CM-5E - available to us at UNI-C and at Thinking Machines Corporation. The CM-200 was, at the time this project started, one of the few existing massively parallel computers...

  3. Dynamic computing random access memory

    International Nuclear Information System (INIS)

    Traversa, F L; Bonani, F; Pershin, Y V; Di Ventra, M

    2014-01-01

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200–2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. (paper)

  4. The computational nature of memory modification.

    Science.gov (United States)

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-03-15

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature.
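The segmentation step described above can be illustrated with a toy sketch. This is a hypothetical threshold-based stand-in for the paper's probabilistic latent-cause inference: an observation either joins an existing cluster (an old memory, which is then updated) or starts a new cluster (a new memory). The function name, threshold parameter, and scalar observations are invented for illustration only.

```python
def assign_latent_causes(observations, threshold):
    """Toy latent-cause segmentation: each scalar observation joins the
    nearest existing cluster if it lies within `threshold` of that
    cluster's running mean (old memory is modified), otherwise a new
    cluster is created (new memory is formed)."""
    means, counts, labels = [], [], []
    for obs in observations:
        if means:
            dists = [abs(obs - m) for m in means]
            j = min(range(len(means)), key=lambda i: dists[i])
            if dists[j] <= threshold:
                counts[j] += 1
                means[j] += (obs - means[j]) / counts[j]  # update old memory
                labels.append(j)
                continue
        means.append(float(obs))  # form a new memory (new latent cause)
        counts.append(1)
        labels.append(len(means) - 1)
    return labels

# Two well-separated regimes are segmented into two latent causes:
print(assign_latent_causes([0.0, 0.1, 5.0, 5.2, 0.05], 1.0))  # → [0, 0, 1, 1, 0]
```

A distance threshold is, of course, a crude proxy for the likelihood-based inference the paper derives, but it captures the qualitative behavior: similar observations modify an existing trace, dissimilar ones spawn a new one.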

  5. The computational nature of memory modification

    Science.gov (United States)

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-01-01

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature. DOI: http://dx.doi.org/10.7554/eLife.23763.001 PMID:28294944

  6. Parallel structures in human and computer memory

    Science.gov (United States)

    Kanerva, Pentti

    1986-08-01

    If we think of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library: We recognize a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. This paper is about how to construct a computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. The paper concludes that the frame problem of artificial intelligence could be solved by the use of such a memory if we were able to encode information about the world properly.

  7. Human Memory Organization for Computer Programs.

    Science.gov (United States)

    Norcio, A. F.; Kerst, Stephen M.

    1983-01-01

    Results of study investigating human memory organization in processing of computer programming languages indicate that algorithmic logic segments form a cognitive organizational structure in memory for programs. Statement indentation and internal program documentation did not enhance organizational process of recall of statements in five Fortran…

  8. Computing betweenness centrality in external memory

    DEFF Research Database (Denmark)

    Arge, Lars; Goodrich, Michael T.; Walderveen, Freek van

    2013-01-01

    Betweenness centrality is one of the most well-known measures of the importance of nodes in a social-network graph. In this paper we describe the first known external-memory and cache-oblivious algorithms for computing betweenness centrality. We present four different external-memory algorithms...
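For reference, the quantity being computed can be sketched with Brandes's classic in-memory algorithm (the external-memory and cache-oblivious algorithms of the paper are considerably more involved; this toy Python version, written for small unweighted undirected graphs given as adjacency dictionaries, only illustrates what betweenness centrality measures):

```python
from collections import deque

def betweenness(graph):
    """Brandes's algorithm for unweighted, undirected graphs.
    graph: dict mapping each node to a list of its neighbors."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, q = [], deque([s])
        pred = {v: [] for v in graph}       # shortest-path predecessors
        sigma = {v: 0 for v in graph}       # number of shortest paths
        dist = {v: -1 for v in graph}
        sigma[s], dist[s] = 1, 0
        while q:                            # BFS from s
            v = q.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                        # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:                            # undirected: each pair counted twice
        bc[v] /= 2.0
    return bc

# In the path a - b - c, only b lies between other nodes:
print(betweenness({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}))
```

The external-memory challenge the paper addresses is precisely that this algorithm's per-source BFS makes unstructured random accesses, which is what cache-oblivious reorganization seeks to avoid.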

  9. Computer Icons and the Art of Memory.

    Science.gov (United States)

    McNair, John R.

    1996-01-01

    States that key aspects of "memoria," the ancient Art of Memory, especially its focus on vivid representational images set against distinct backgrounds, can be helpful in creating memorable, universal, and easily retrievable computer icons. (PA)

  10. Memory Reconsolidation and Computational Learning

    Science.gov (United States)

    2010-03-01

Siegelmann-Danieli, N. and Siegelmann, H.T., "Robust Artificial Life Via Artificial Programmed Death," Artificial Intelligence 172(6-7), April 2008: 884-898. ... Memory models are central to Artificial Intelligence and Machine... beyond [1]. The advances cited are a significant step toward creating Artificial Intelligence via neural networks at the human level. Our network...

  11. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.; Zidan, Mohammed A.; Kurdahi, Fadi; Eltawil, Ahmed M.

    2016-01-01

Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.
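The compare-and-write operation described in the abstract can be sketched as a behavioral software model (this is only an illustration of the described method, not the hardware; the bit-list encoding and the function name are invented for clarity):

```python
def rcam_step(cam_row, key, activated, masked):
    """Behavioral model of one RCAM row operation.
    cam_row, key: lists of 0/1 bits; activated: bits selected for the
    comparison; masked: key bits written back to the row on a match.
    Returns the tag bit and the (possibly updated) row."""
    # Compare only the activated bit positions of the key register.
    tag = all(not a or (r == k) for r, k, a in zip(cam_row, key, activated))
    if tag:
        # On a match, write the masked key bits into the matching row.
        cam_row = [k if m else r for r, k, m in zip(cam_row, key, masked)]
    return tag, cam_row

# Row matches the key on the two activated bits, so the masked bits are written:
tag, row = rcam_step([1, 0, 1, 1], [1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1])
print(tag, row)  # → True [1, 0, 0, 0]
```

In the actual architecture this comparison happens in parallel across every row of the memristor crossbar; the sequential model above only captures the per-row logic.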

  12. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.

    2016-12-08

Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.

  13. Associative Memory Computing Power and Its Simulation

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130000 pre-calculated patterns and large numbers of chips can easily be assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed in commercial computers based on normal CPUs. The algorithm performance is limited due to the lack of parallelism, and in addition the memory requirement is very large. In fact, the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...

  14. Associative Memory computing power and its simulation

    CERN Document Server

    Ancu, L S; The ATLAS collaboration; Britzger, D; Giannetti, P; Howarth, J W; Luongo, C; Pandini, C; Schmitt, S; Volpi, G

    2014-01-01

The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130000 pre-calculated patterns and large numbers of chips can easily be assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed in commercial computers based on normal CPUs. The algorithm performance is limited due to the lack of parallelism, and in addition the memory requirement is very large. In fact, the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...

  15. Distributed Memory Parallel Computing with SEAWAT

    Science.gov (United States)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (~10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources.

  16. Paging memory from random access memory to backing storage in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  17. Memory systems, computation, and the second law of thermodynamics

    International Nuclear Information System (INIS)

    Wolpert, D.H.

    1992-01-01

A memory is a physical system for transferring information from one moment in time to another, where that information concerns something external to the system itself. This paper argues on information-theoretic and statistical-mechanical grounds that useful memories must be of one of two types, exemplified by memory in abstract computer programs and by memory in photographs. Photograph-type memories work by exploiting a collapse of state-space flow onto an attractor state. (This attractor state is the "initialized" state of the memory.) The central assumption of the theory of reversible computation tells us that any such collapsing, regardless of how it is achieved, must increase the entropy of the system. In concert with the second law, this establishes the logical necessity of the empirical observation that photograph-type memories are temporally asymmetric (they can tell us about the past but not about the future). Under the assumption that human memory is a photograph-type memory, this result also explains why we humans can remember only our past and not our future. In contrast to photograph-type memories, computer-type memories do not require any initialization, and therefore are not directly affected by the second law. As a result, computer memories can be of the future as easily as of the past, even if the program running on the computer is logically irreversible. This is entirely in accord with the well-known temporal reversibility of the process of computation. This paper ends by arguing that the asymmetry of the psychological arrow of time is a direct consequence of the asymmetry of human memory. With the rest of this paper, this explains, explicitly and rigorously, why the psychological and thermodynamic arrows of time are correlated with one another. 24 refs

  18. Amorphous Semiconductors: From Photocatalyst to Computer Memory

    Science.gov (United States)

    Sundararajan, Mayur

encouraging but inconclusive. Then the method was successfully demonstrated on mesoporous TiO2-SiO2 by showing a shift in its optical bandgap. One special class of amorphous semiconductors is chalcogenide glasses, which exhibit high ionic conductivity even at room temperature. When metal-doped chalcogenide glasses are under an electric field, they become electronically conductive. These properties are exploited in the computer memory storage application of Conductive Bridging Random Access Memory (CBRAM). CBRAM is a non-volatile memory that is a strong contender to replace conventional volatile RAMs such as DRAM, SRAM, etc. This technology has already been commercialized, but the working mechanism is still not clearly understood, especially the nature of the conductive bridge filament. In this project, the CBRAM memory cells are fabricated by the thermal evaporation method with Agx(GeSe2)1-x as the solid electrolyte layer, Ag as the active electrode and Au as the inert electrode. By careful use of cyclic voltammetry, the conductive filaments were grown on the surface and in the bulk of the solid electrolyte. The comparison between the two filaments revealed major differences, leading to contradiction with the existing working mechanism. After compiling all the results, a modified working mechanism is proposed. SAXS is a powerful tool for characterizing the nanostructure of glasses. The analysis of SAXS data to extract useful information is usually performed by different programs. In this project, the Irena and GIFT programs were compared by performing the analysis of the SAXS data of glass and glass-ceramic samples. Irena was shown to be not suitable for the analysis of SAXS data that has a significant contribution from interparticle interactions. GIFT was demonstrated to be better suited for such analysis. Additionally, the results obtained by the two programs for samples with low interparticle interactions were shown to be consistent.

  19. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
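The locality-improving sort described above can be sketched in a few lines of NumPy (a minimal illustration assuming a uniform 2D grid; the cell-indexing scheme and function name are hypothetical, and the paper's actual sorting algorithm and data layout may differ):

```python
import numpy as np

def sort_particles_by_cell(x, y, ny, cell_size):
    """Reorder particle arrays so that particles in the same grid cell
    become contiguous in memory, reducing random (paged) accesses during
    charge accumulation and particle pushing."""
    ix = (x / cell_size).astype(int)     # cell column of each particle
    iy = (y / cell_size).astype(int)     # cell row of each particle
    cell = ix * ny + iy                  # flattened cell index
    order = np.argsort(cell, kind='stable')
    return x[order], y[order]

# Particles 0 and 2 share a cell, so they end up adjacent in the arrays:
xs, ys = sort_particles_by_cell(np.array([0.1, 2.1, 0.2]),
                                np.array([0.1, 0.1, 0.2]), ny=4, cell_size=1.0)
print(xs)  # → [0.1 0.2 2.1]
```

A stable sort is used so particles within a cell keep their relative order, which keeps the amount of data movement per timestep small when particles drift only slowly between cells.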

  20. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

This paper proposes a computational model of memory from an information-processing view. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices based on biology and graph theory, and an intra-modular network is developed with a modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. The memory phenomena and the functions of memorization and strengthening are simulated by information-processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information-processing view. PMID:27876847

  1. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

This paper proposes a computational model of memory from an information-processing view. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices based on biology and graph theory, and an intra-modular network is developed with a modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. The memory phenomena and the functions of memorization and strengthening are simulated by information-processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information-processing view.

  2. The Spacetime Memory of Geometric Phases and Quantum Computing

    CERN Document Server

    Binder, B

    2002-01-01

    Spacetime memory is defined with a holonomic approach to information processing, where multi-state stability is introduced by a non-linear phase-locked loop. Geometric phases serve as the carrier of physical information and geometric memory (of orientation) given by a path integral measure of curvature that is periodically refreshed. Regarding the resulting spin-orbit coupling and gauge field, the geometric nature of spacetime memory suggests to assign intrinsic computational properties to the electromagnetic field.

  3. Computational modelling of memory retention from synapse to behaviour

    Science.gov (United States)

    van Rossum, Mark C. W.; Shippi, Maria

    2013-03-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.

  4. Computational modelling of memory retention from synapse to behaviour

    International Nuclear Information System (INIS)

    Van Rossum, Mark C W; Shippi, Maria

    2013-01-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists. (paper)

  5. Static Memory Deduplication for Performance Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gangyong Jia

    2017-04-01

In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.

  6. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
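The core page-sharing idea behind such deduplication can be sketched with content hashing (an illustrative model only, not the authors' SMD implementation; the function name and the use of SHA-256 digests as page fingerprints are assumptions):

```python
import hashlib

def dedup_pages(pages):
    """Offline page deduplication sketch: fingerprint each (code-segment)
    page by its content hash and map duplicate pages onto a single stored
    copy. Returns the unique pages and a per-page index into them."""
    store = {}      # content digest -> index of the canonical copy
    unique = []     # one stored copy per distinct page content
    mapping = []    # for each input page, which unique copy backs it
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        if digest not in store:
            store[digest] = len(unique)
            unique.append(page)
        mapping.append(store[digest])
    return unique, mapping

# Two of the three 4 KiB pages are identical, so only two copies are kept:
unique, mapping = dedup_pages([b'a' * 4096, b'b' * 4096, b'a' * 4096])
print(len(unique), mapping)  # → 2 [0, 1, 0]
```

Doing this scan offline, and only over code-segment pages (which are read-only and highly shared), is what lets the paper's approach avoid the runtime comparison overhead of online deduplication.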

  7. Associative Memory computing power and its simulation.

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

The associative memory (AM) chip is an ASIC device specifically designed to perform ``pattern matching'' at very high speed and with parallel access to memory locations. The most extensive use of such a device will be the ATLAS Fast Tracker (FTK) processor, where more than 8000 chips will be installed in 128 VME boards, specifically designed for high throughput in order to exploit the chip's features. Each AM chip will store a database of about 130000 pre-calculated patterns, allowing FTK to use about 1 billion patterns for the whole system, with any data inquiry broadcast to all memory elements simultaneously within the same clock cycle (10 ns), thus data retrieval time is independent of the database size. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS FTK processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 $\mathrm{\mu s}$. The simulation of such a parallelized system is an extremely complex task when executed in comm...

  8. Computing visibility on terrains in external memory

    NARCIS (Netherlands)

    Haverkort, H.J.; Toma, L.; Zhuang, Yi

    2007-01-01

    We describe a novel application of the distribution sweeping technique to computing visibility on terrains. Given an arbitrary viewpoint v, the basic problem we address is computing the visibility map or viewshed of v, which is the set of points in the terrain that are visible from v. We give the

  9. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  10. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

    This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime.  The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de

  11. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  12. Graphical Visualization on Computational Simulation Using Shared Memory

    International Nuclear Information System (INIS)

    Lima, A B; Correa, Eberth

    2014-01-01

    The Shared Memory technique is a powerful tool for parallelizing computer codes. In particular, it can be used to visualize the results ''on the fly'' without halting the simulation. In this presentation we discuss and show how to use the technique in conjunction with a visualization code using OpenGL.

  13. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Directory of Open Access Journals (Sweden)

    André David Kovac

    Full Text Available Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/prey population interactions. The evidence is mounting, not only to the presence of delays as physical constraints in signal propagation speed, but also to their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version to the single node Delay-Coupled Reservoir, that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.
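    The core construction, a single nonlinear node whose delayed output is fed back to itself while inputs are time-multiplexed over "virtual nodes" along the delay line, can be sketched as follows. This is a generic illustration of a delay-coupled reservoir; the parameter names and values are our own, not those of the paper.

```python
import numpy as np

def delay_reservoir(u, n_virtual=20, eta=0.5, gamma=0.05):
    """Single nonlinear node with delayed self-feedback.  Each input
    sample is spread over n_virtual time slots by a fixed random mask;
    the node states at those slots form the reservoir state vector."""
    rng = np.random.default_rng(0)
    mask = rng.uniform(-1, 1, n_virtual)     # input mask over the delay line
    x = np.zeros(n_virtual)                  # states one delay period ago
    states = []
    for sample in u:
        new_x = np.empty(n_virtual)
        for i in range(n_virtual):
            # Nonlinear node driven by its own delayed state plus masked input.
            new_x[i] = np.tanh(eta * x[i] + gamma * mask[i] * sample)
        x = new_x
        states.append(x.copy())
    return np.array(states)

states = delay_reservoir(np.sin(np.linspace(0, 4 * np.pi, 50)))
```

    A linear readout trained on these state vectors performs the actual computation; the trained linear feedback proposed in the paper would additionally route readout outputs back into the input.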

  14. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Science.gov (United States)

    Kovac, André David; Koall, Maximilian; Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/prey population interactions. The evidence is mounting, not only to the presence of delays as physical constraints in signal propagation speed, but also to their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version to the single node Delay-Coupled Reservoir, that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.

  15. Optical computing, optical memory, and SBIRs at Foster-Miller

    Science.gov (United States)

    Domash, Lawrence H.

    1994-03-01

    A desktop design and manufacturing system for binary diffractive elements, MacBEEP, was developed with the optical researcher in mind. Optical processing systems for specialized tasks such as cellular automaton computation and fractal measurement were constructed. A new family of switchable holograms has enabled several applications for control of laser beams in optical memories. New spatial light modulators and optical logic elements have been demonstrated based on a more manufacturable semiconductor technology. Novel synthetic and polymeric nonlinear materials for optical storage are under development in an integrated memory architecture. SBIR programs enable creative contributions from smaller companies, both product oriented and technology oriented, and support advances that might not otherwise be developed.

  16. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

    The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n) -time and O(n) -space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n , requires more than 20 n  bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. https://github.com/solonas13/maw (free software under the terms of the GNU GPL). alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
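    The definition behind the algorithm is simple: a word w (of length at least 2) is a minimal absent word of s if w does not occur in s but both of its maximal proper factors, w[:-1] and w[1:], do. That definition can be checked directly with a naive in-memory sketch (our own illustration; emMAW itself uses suffix arrays and external memory and is vastly more efficient):

```python
def minimal_absent_words(s):
    """Naive computation of minimal absent words (MAWs): w is a MAW of s
    if w is absent from s but both w[:-1] and w[1:] occur in s.  Far
    simpler (and far slower) than the suffix-array based algorithms."""
    alphabet = sorted(set(s))
    factors = {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
    maws = set()
    for f in factors:                # every candidate is a factor + a letter
        for c in alphabet:
            w = f + c
            if w not in factors and w[1:] in factors:
                maws.add(w)          # w[:-1] == f occurs by construction
    return sorted(maws)

maws = minimal_absent_words("abaab")
```

    For "abaab" this yields the MAWs "aaa", "aaba", "bab", and "bb": each is absent, yet removing either end letter gives a factor that occurs.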

  17. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.

  18. Injecting Artificial Memory Errors Into a Running Computer Program

    Science.gov (United States)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
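    The core operation, flipping a single randomly chosen bit of a value held in memory, can be sketched in a few lines. This is a hypothetical illustration, not BITFLIPS code; the real tool operates inside Valgrind on arbitrary memory, not only on floats.

```python
import random
import struct

def inject_seu(value, rng=random):
    """Flip one random bit in the IEEE-754 double representation of
    value, mimicking a single-event upset in memory."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    bits ^= 1 << rng.randrange(64)           # the simulated SEU
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits))
    return flipped

x = inject_seu(3.14159, random.Random(42))
```

    Depending on which bit is hit, the corrupted value may differ negligibly, change sign, jump by many orders of magnitude, or become NaN, which is precisely why fault-tolerance evaluation needs many randomized trials.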

  19. Reprogrammable logic in memristive crossbar for in-memory computing

    Science.gov (United States)

    Cheng, Long; Zhang, Mei-Yun; Li, Yi; Zhou, Ya-Xiong; Wang, Zhuo-Rui; Hu, Si-Yu; Long, Shi-Bing; Liu, Ming; Miao, Xiang-Shui

    2017-12-01

    Memristive stateful logic has emerged as a promising next-generation in-memory computing paradigm to address escalating computing-performance pressures in traditional von Neumann architecture. Here, we present a nonvolatile reprogrammable logic method that can process data between different rows and columns in a memristive crossbar array based on material implication (IMP) logic. Arbitrary Boolean logic can be executed with a reprogrammable cell containing four memristors in a crossbar array. In the fabricated Ti/HfO2/W memristive array, some fundamental functions, such as universal NAND logic and data transfer, were experimentally implemented. Moreover, using eight memristors in a 2  ×  4 array, a one-bit full adder was theoretically designed and verified by simulation to exhibit the feasibility of our method to accomplish complex computing tasks. In addition, some critical logic-related performances were further discussed, such as the flexibility of data processing, cascading problem and bit error rate. Such a method could be a step forward in developing IMP-based memristive nonvolatile logic for large-scale in-memory computing architecture.
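    In Boolean terms, the IMP operation between two memristors computes p IMP q = (NOT p) OR q, and NAND, which is functionally complete, follows from two IMP steps plus one cell initialized to 0. A truth-table sketch in plain Python (abstracting away the device physics and the crossbar addressing) makes the construction concrete:

```python
def imp(p, q):
    """Material implication as realized by memristive stateful logic:
    the target device q is conditionally switched, yielding (NOT p) OR q."""
    return (not p) or q

def nand(p, q):
    # NAND from two IMP steps and a work cell initialized to 0 (False),
    # the standard construction in IMP-based stateful logic.
    return imp(p, imp(q, False))

table = {(p, q): nand(p, q) for p in (False, True) for q in (False, True)}
```

    Since NAND is universal, chaining such steps across rows and columns of the crossbar suffices for arbitrary Boolean logic, as the abstract states.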

  20. Towards Modeling False Memory With Computational Knowledge Bases.

    Science.gov (United States)

    Li, Justin; Kohanyi, Emma

    2017-01-01

    One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.

  1. Translation Memory and Computer Assisted Translation Tool for Medieval Texts

    Directory of Open Access Journals (Sweden)

    Törcsvári Attila

    2013-05-01

    Full Text Available Translation memories (TMs), as part of Computer Assisted Translation (CAT) tools, support translators in reusing portions of formerly translated text. Fencing books are good candidates for using TMs due to the high number of repeated terms. Medieval texts suffer from a number of drawbacks that make even “simple” rewording into the modern version of the same language hard. The difficulties analyzed are: lack of systematic spelling, unusual word orders, and typos in the original. A hypothesis is made and verified that even simple modernization increases legibility, that it is feasible, and that it is worthwhile to apply translation memories given the numerous, and often extremely long, repeated terms. Therefore, methods and algorithms are presented 1. for automated transcription of medieval texts (when a limited training set is available, and 2. collection of repeated patterns. The efficiency of the algorithms is analyzed for recall and precision.
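    The second task, collecting repeated patterns to seed a translation memory, can be approximated with a naive word n-gram counter. This is our own sketch of the idea, not the paper's algorithm, which is considerably more refined:

```python
from collections import Counter

def repeated_patterns(text, min_count=2):
    """Count every contiguous word n-gram (n >= 2) and keep those that
    repeat; repeated long phrases are prime translation-memory entries."""
    words = text.split()
    counts = Counter()
    for n in range(2, len(words) + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return {p: c for p, c in counts.items() if c >= min_count}

patterns = repeated_patterns("a b c a b c")
```

    On real texts one would cap the n-gram length and suppress patterns subsumed by longer repeats, but even this crude version surfaces the "extremely long repeated terms" the abstract mentions.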

  2. An Alternative Algorithm for Computing Watersheds on Shared Memory Parallel Computers

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.

    1995-01-01

    In this paper a parallel implementation of a watershed algorithm is proposed. The algorithm can easily be implemented on shared memory parallel computers. The watershed transform is generally considered to be inherently sequential since the discrete watershed of an image is defined using recursion.

  3. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The approach is first to design a computational structure that is well suited for a wide range of vision tasks, and then to develop parallel algorithms that can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis he demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  4. Retrieval and organizational strategies in conceptual memory a computer model

    CERN Document Server

    Kolodner, Janet L

    2014-01-01

    'Someday we expect that computers will be able to keep us informed about the news. People have imagined being able to ask their home computers questions such as "What's going on in the world?"…'. Originally published in 1984, this book is a fascinating look at the world of memory and computers before the internet became the mainstream phenomenon it is today. It looks at the early development of a computer system that could keep us informed in a way that we now take for granted. Presenting a theory of remembering, based on human information processing, it begins to address many of the hard problems implicated in the quest to make computers remember. The book had two purposes in presenting this theory of remembering. First, to be used in implementing intelligent computer systems, including fact retrieval systems and intelligent systems in general. Any intelligent program needs to store and use a great deal of knowledge. The strategies and structures in the book were designed to be used for that purpos...

  5. Robust dynamical decoupling for quantum computing and quantum memory.

    Science.gov (United States)

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.
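    The refocusing principle behind DD can be seen in a minimal two-level simulation: under a static dephasing term, a two-pulse Carr-Purcell sequence with ideal π pulses restores the initial superposition exactly, while free evolution does not. This is a textbook illustration with idealized, error-free pulses, not the robust sequence introduced in the paper:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def u_free(omega, t):
    # Free evolution under a static dephasing Hamiltonian (omega/2) * sigma_z.
    return np.cos(omega * t / 2) * np.eye(2) - 1j * np.sin(omega * t / 2) * sz

def fidelity(u):
    # Overlap of the evolved state with the initial superposition |+>.
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    return abs(plus.conj() @ (u @ plus)) ** 2

omega, tau = 1.3, 0.7
x_pulse = -1j * sx                              # ideal pi rotation about x
cp = (u_free(omega, tau) @ x_pulse @
      u_free(omega, 2 * tau) @ x_pulse @ u_free(omega, tau))
f_dd = fidelity(cp)                             # refocused: back to |+>
f_free = fidelity(u_free(omega, 4 * tau))       # unprotected dephasing
```

    Real pulses have finite duration and flip-angle errors that accumulate over long sequences, which is exactly the problem the robust sequences in this work address.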

  6. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  7. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    Energy Technology Data Exchange (ETDEWEB)

    Vineyard, Craig Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis-inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  8. Perspective: Memcomputing: Leveraging memory and physics to compute efficiently

    Science.gov (United States)

    Di Ventra, Massimiliano; Traversa, Fabio L.

    2018-05-01

    It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs
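    For contrast with the DMM approach, the conventional classical baseline for subset-sum is the standard pseudo-polynomial dynamic program, whose cost grows with the target value rather than polynomially in the input length. A minimal sketch:

```python
def subset_sum(nums, target):
    """Classical pseudo-polynomial DP for subset-sum: track every sum
    reachable by some subset of nums, up to target.  Runs in
    O(len(nums) * target) time, exponential in the bit-length of target."""
    reachable = {0}
    for n in nums:
        reachable |= {s + n for s in reachable if s + n <= target}
    return target in reachable

found = subset_sum([3, 34, 4, 12, 5, 2], 9)   # 4 + 5 = 9
```

    The hard instances referred to in the text are exactly those where the target is exponentially large in the input size, so this DP becomes infeasible.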

  9. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  10. Computer Use and Its Effect on the Memory Process in Young and Adults

    Science.gov (United States)

    Alliprandini, Paula Mariza Zedu; Straub, Sandra Luzia Wrobel; Brugnera, Elisangela; de Oliveira, Tânia Pitombo; Souza, Isabela Augusta Andrade

    2013-01-01

    This work investigates the effect of computer use on the memory process in young people and adults under the Perceptual and Memory experimental conditions. The memory condition involved the phases of information acquisition and retrieval, over time intervals (2 min, 24 hours, and 1 week) in pre- and post-test situations (before and after the participants…

  11. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  12. Providing for organizational memory in computer supported meetings

    OpenAIRE

    Schwabe, Gerhard

    1994-01-01

    Meeting memory features are poorly integrated into current group support systems (GSS). In this article, I discuss how to introduce meeting memory functionality into a GSS. The article first introduces the benefits of effective meetings and organizational memory to an organization. Then, the following challenges to design are discussed: How to store semantically rich output, how to build up the meeting memory with a minimum of additional effort, how to integrate meeting memory into organizati...

  13. All-spin logic operations: Memory device and reconfigurable computing

    Science.gov (United States)

    Patra, Moumita; Maiti, Santanu K.

    2018-02-01

    Exploiting the spin degree of freedom of the electron, a new proposal is given to characterize spin-based logical operations using a quantum interferometer that can be utilized as a programmable spin logic device (PSLD). The ON and OFF states of both inputs and outputs are described by spin state only, circumventing the spin-to-charge conversion that conventional devices often require at every stage through extra hardware, which can eventually diminish efficiency. All possible logic functions can be engineered from a single device without redesigning the circuit, which certainly offers opportunities for designing a new generation of spintronic devices. Moreover, we also discuss the utilization of the present model as a memory device, together with suitable computing operations and proposed experimental setups.

  14. Irrelevant sensory stimuli interfere with working memory storage: evidence from a computational model of prefrontal neurons.

    Science.gov (United States)

    Bancroft, Tyler D; Hockley, William E; Servos, Philip

    2013-03-01

    The encoding of irrelevant stimuli into the memory store has previously been suggested as a mechanism of interference in working memory (e.g., Lange & Oberauer, Memory, 13, 333-339, 2005; Nairne, Memory & Cognition, 18, 251-269, 1990). Recently, Bancroft and Servos (Experimental Brain Research, 208, 529-532, 2011) used a tactile working memory task to provide experimental evidence that irrelevant stimuli were, in fact, encoded into working memory. In the present study, we replicated Bancroft and Servos's experimental findings using a biologically based computational model of prefrontal neurons, providing a neurocomputational model of overwriting in working memory. Furthermore, our modeling results show that inhibition acts to protect the contents of working memory, and they suggest a need for further experimental research into the capacity of vibrotactile working memory.

  15. Computational and empirical simulations of selective memory impairments: Converging evidence for a single-system account of memory dissociations.

    Science.gov (United States)

    Curtis, Evan T; Jamieson, Randall K

    2018-04-01

    Current theory has divided memory into multiple systems, resulting in a fractionated account of human behaviour. By an alternative perspective, memory is a single system. However, debate over the details of different single-system theories has overshadowed the converging agreement among them, slowing the reunification of memory. Evidence in favour of dividing memory often takes the form of dissociations observed in amnesia, where amnesic patients are impaired on some memory tasks but not others. The dissociations are taken as evidence for separate explicit and implicit memory systems. We argue against this perspective. We simulate two key dissociations between classification and recognition in a computational model of memory, A Theory of Nonanalytic Association. We assume that amnesia reflects a quantitative difference in the quality of encoding. We also present empirical evidence that replicates the dissociations in healthy participants, simulating amnesic behaviour by reducing study time. In both analyses, we successfully reproduce the dissociations. We integrate our computational and empirical successes with the success of alternative models and manipulations and argue that our demonstrations, taken in concert with similar demonstrations with similar models, provide converging evidence for a more general set of single-system analyses that support the conclusion that a wide variety of memory phenomena can be explained by a unified and coherent set of principles.

  16. Spin-transfer torque magnetoresistive random-access memory technologies for normally off computing (invited)

    International Nuclear Information System (INIS)

    Ando, K.; Yuasa, S.; Fujita, S.; Ito, J.; Yoda, H.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.

    2014-01-01

    Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally off computers. Critical tasks to achieve normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed by using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and challenges that remain for normally off computers are discussed.

  17. Programs for Testing Processor-in-Memory Computing Systems

    Science.gov (United States)

    Katz, Daniel S.

    2006-01-01

    The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
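
    The tiered idea above — verify the simplest functionality before exercising anything complex — can be sketched with a minimal multithreaded check. This is an illustrative analogue in Python's threading module, not the actual benchmark suite (which targets POSIX pthreads); the function name and sizes are assumptions:

    ```python
    import threading

    def basic_increment_test(n_threads=4, iters=10000):
        """Simplest tier of a microbenchmark series: verify that a
        lock-protected shared counter ends up exact under concurrency."""
        counter = {"value": 0}
        lock = threading.Lock()

        def worker():
            for _ in range(iters):
                with lock:
                    counter["value"] += 1

        threads = [threading.Thread(target=worker) for _ in range(n_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter["value"] == n_threads * iters

    # Start with the simplest check; later tiers would add contention,
    # barriers, and increasingly complex synchronization patterns.
    assert basic_increment_test()
    ```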

  18. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time.
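
    The sorting idea in this record — keep physically adjacent particles near particles with neighboring array indices — can be sketched in a few lines. The function name and the 2-D cell decomposition are illustrative assumptions, not the authors' code:

    ```python
    def sort_particles_by_cell(positions, cell_size):
        """Reorder particles so that particles in the same spatial cell
        occupy neighboring array indices, improving locality of the
        subsequent charge-accumulation and particle-pushing sweeps."""
        return sorted(positions, key=lambda p: (int(p[0] // cell_size),
                                                int(p[1] // cell_size)))

    particles = [(9.5, 0.2), (0.1, 0.3), (9.7, 0.1), (0.4, 0.2)]
    ordered = sort_particles_by_cell(particles, cell_size=1.0)
    # Particles from cell (0, 0) now precede those from cell (9, 0),
    # so a sweep over the array touches each memory page once.
    ```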

  19. Working Memory Interventions with Children: Classrooms or Computers?

    Science.gov (United States)

    Colmar, Susan; Double, Kit

    2017-01-01

    The importance of working memory to classroom functioning and academic outcomes has led to the development of many interventions designed to enhance students' working memory. In this article we briefly review the evidence for the relative effectiveness of classroom and computerised working memory interventions in bringing about measurable and…

  20. Evaluation of External Memory Access Performance on a High-End FPGA Hybrid Computer

    Directory of Open Access Journals (Sweden)

    Konstantinos Kalaitzis

    2016-10-01

    The motivation of this research was to evaluate the main-memory performance of a hybrid supercomputer such as the Convey HC-x, and to ascertain how the controller performs in several access scenarios, vis-à-vis hand-coded memory prefetches. Such memory patterns are very useful in stencil computations. The theoretical bandwidth of the memory of the Convey is compared with the results of our measurements. The accurate study of the memory subsystem is particularly useful for users when they are developing their application-specific personality. Experiments were performed to measure the bandwidth between the coprocessor and the memory subsystem. The experiments aimed mainly at measuring the read access speed of the memory from the Application Engines (FPGAs). Different ways of accessing data were used in order to find the most efficient way to access memory; this way is proposed for future work on the Convey HC-x. When performing a series of accesses to memory, non-uniform latencies occur, which the Memory Controller of the Convey HC-x coprocessor attempts to cover. We measure memory efficiency as the ratio of the number of memory accesses to the number of execution cycles, and this ratio converges to one in most cases. In addition, we performed experiments with hand-coded memory accesses. The analysis of the experimental results shows how the memory subsystem and Memory Controllers work. From this work we conclude that the memory controllers do an excellent job, largely because (transparently to the user) they seem to cache large amounts of data, and hence hand-coding is not needed in most situations.
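
    The access-pattern methodology described — timing sequential versus strided sweeps over a large buffer — can be illustrated with a pure-Python sketch. Interpreter overhead dominates here, so this shows only the shape of the experiment, not FPGA-side numbers; the helper name and buffer size are assumptions:

    ```python
    import array
    import time

    def read_time(buf, stride):
        """Time one read pass over the buffer at the given stride
        (in elements); return (elapsed seconds, checksum)."""
        t0 = time.perf_counter()
        s = 0
        for i in range(0, len(buf), stride):
            s += buf[i]
        return time.perf_counter() - t0, s

    buf = array.array("d", range(1 << 20))   # 8 MiB of doubles
    seq_t, _ = read_time(buf, 1)
    strided_t, _ = read_time(buf, 64)
    # The strided pass touches only 1/64 of the elements; comparing
    # per-element times hints at cache-line and prefetch effects
    # (treat this purely as an illustration of the method).
    ```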

  1. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  2. A 32-bit computer for large memory applications on the FASTBUS

    International Nuclear Information System (INIS)

    Kellner, R.; Blossom, J.M.; Hung, J.P.

    1985-01-01

    A FASTBUS based 32-bit computer is being built at Los Alamos National Laboratory for use in systems requiring large fast memory in the FASTBUS environment. A separate local execution bus allows data reduction to proceed concurrently with other FASTBUS operations. The computer, which can operate in either master or slave mode, includes the National Semiconductor NS32032 chip set with demand paged memory management, floating point slave processor, interrupt control unit, timers, and time-of-day clock. The 16.0 megabytes of random access memory are interleaved to allow windowed direct memory access on and off the FASTBUS at 80 megabytes per second.

  3. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
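
    The load-dependent, neighborhood-limited flavor of ACWN-style scheduling can be caricatured in a few lines: a newly created process stays local unless a neighbor is substantially lighter. The threshold rule and data structures below are illustrative assumptions, not the published algorithm:

    ```python
    def place_process(loads, neighbors, origin, threshold=2):
        """Assign a newly created process: keep it on the originating
        node unless some neighbor is lighter by more than `threshold`,
        so work spreads quickly but only within a neighborhood."""
        best = min(neighbors[origin], key=lambda n: loads[n])
        target = best if loads[origin] - loads[best] > threshold else origin
        loads[target] += 1
        return target

    loads = {0: 5, 1: 1, 2: 4}
    neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    chosen = place_process(loads, neighbors, origin=0)
    # The new process migrates to node 1, the least-loaded neighbor of node 0.
    ```

    A randomized-allocation baseline would instead pick a target uniformly at random, ignoring loads entirely; the gradient model propagates load information globally rather than consulting only direct neighbors.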

  4. Wearable Intrinsically Soft, Stretchable, Flexible Devices for Memories and Computing.

    Science.gov (United States)

    Rajan, Krishna; Garofalo, Erik; Chiolerio, Alessandro

    2018-01-27

    A recent trend in the development of high mass consumption electron devices is towards electronic textiles (e-textiles), smart wearable devices, smart clothes, and flexible or printable electronics. Intrinsically soft, stretchable, flexible, Wearable Memories and Computing devices (WMCs) bring us closer to sci-fi scenarios, where future electronic systems are totally integrated in our everyday outfits and help us in achieving a higher comfort level, interacting for us with other digital devices such as smartphones and domotics, or with analog devices, such as our brain/peripheral nervous system. WMC will enable each of us to contribute to open and big data systems as individual nodes, providing real-time information about physical and environmental parameters (including air pollution monitoring, sound and light pollution, chemical or radioactive fallout alert, network availability, and so on). Furthermore, WMC could be directly connected to human brain and enable extremely fast operation and unprecedented interface complexity, directly mapping the continuous states available to biological systems. This review focuses on recent advances in nanotechnology and materials science and pays particular attention to any result and promising technology to enable intrinsically soft, stretchable, flexible WMC.

  5. Ring interconnection for distributed memory automation and computing system

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, V I [Inst. for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation)

    1996-12-31

    Problems of development of measurement, acquisition and control systems based on a distributed memory and a ring interface are discussed. It has been found that the RAM LINK-type protocol can be used for ringlet links in non-symmetrical distributed memory architecture multiprocessor system interaction. 5 refs.

  6. The Sensitivity of Memory Consolidation and Reconsolidation to Inhibitors of Protein Synthesis and Kinases: Computational Analysis

    Science.gov (United States)

    Zhang, Yili; Smolen, Paul; Baxter, Douglas A.; Byrne, John H.

    2010-01-01

    Memory consolidation and reconsolidation require kinase activation and protein synthesis. Blocking either process during or shortly after training or recall disrupts memory stabilization, which suggests the existence of a critical time window during which these processes are necessary. Using a computational model of kinase synthesis and…

  7. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej; Paszyński, Maciej R.; Pardo, D.; Dalcin, Lisandro; Calo, Victor M.

    2015-01-01

    This paper derives theoretical estimates of the computational cost for isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for the Cp-1 global continuity of the isogeometric solution

  8. Computational dissection of human episodic memory reveals mental process-specific genetic profiles

    Science.gov (United States)

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G.; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J.-F.

    2015-01-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory. PMID:26261317

  10. Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures

    Science.gov (United States)

    2017-10-04

    Methods were developed that map to the memory architectures of CPUs and GPUs to obtain good performance, achieving good memory performance through cache management. The PI and students developed new methods for path and ray tracing. The efficiency of the method makes it a good candidate for forming hybrid schemes with wave-based models. One possibility is to couple the ray curve

  11. Projection multiplex recording of computer-synthesised one-dimensional Fourier holograms for holographic memory systems: mathematical and experimental modelling

    Energy Technology Data Exchange (ETDEWEB)

    Betin, A Yu; Bobrinev, V I; Verenikina, N M; Donchenko, S S; Odinokov, S B [Research Institute ' Radiotronics and Laser Engineering' , Bauman Moscow State Technical University, Moscow (Russian Federation); Evtikhiev, N N; Zlokazov, E Yu; Starikov, S N; Starikov, R S [National Reseach Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow (Russian Federation)

    2015-08-31

    A multiplex method of recording computer-synthesised one-dimensional Fourier holograms intended for holographic memory devices is proposed. The method potentially allows increasing the recording density in the previously proposed holographic memory system based on the computer synthesis and projection recording of data page holograms. (holographic memory)
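
    At its core, synthesizing a one-dimensional Fourier hologram involves taking a 1-D Fourier transform of the encoded data page. A minimal direct-DFT sketch follows; it is illustrative only (the method in the record additionally performs multiplexing and projection recording), and the function name is an assumption:

    ```python
    import cmath

    def dft1d(page):
        """Direct 1-D discrete Fourier transform of a data page — the
        core step in computer synthesis of a 1-D Fourier hologram
        (O(n^2) for clarity; real systems would use an FFT and add
        encoding, padding, and phase masks)."""
        n = len(page)
        return [sum(page[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for k in range(n))
                for j in range(n)]

    hologram = dft1d([1, 0, 0, 0])
    # A single "on" pixel transforms to a spectrum of uniform magnitude,
    # spreading the information across the whole hologram.
    ```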

  12. Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms.

    Science.gov (United States)

    James, Ella L; Bonsall, Michael B; Hoppitt, Laura; Tunbridge, Elizabeth M; Geddes, John R; Milton, Amy L; Holmes, Emily A

    2015-08-01

    Memory of a traumatic event becomes consolidated within hours. Intrusive memories can then flash back repeatedly into the mind's eye and cause distress. We investigated whether reconsolidation-the process during which memories become malleable when recalled-can be blocked using a cognitive task and whether such an approach can reduce these unbidden intrusions. We predicted that reconsolidation of a reactivated visual memory of experimental trauma could be disrupted by engaging in a visuospatial task that would compete for visual working memory resources. We showed that intrusive memories were virtually abolished by playing the computer game Tetris following a memory-reactivation task 24 hr after initial exposure to experimental trauma. Furthermore, both memory reactivation and playing Tetris were required to reduce subsequent intrusions (Experiment 2), consistent with reconsolidation-update mechanisms. A simple, noninvasive cognitive-task procedure administered after emotional memory has already consolidated (i.e., > 24 hours after exposure to experimental trauma) may prevent the recurrence of intrusive memories of those emotional events. © The Author(s) 2015.

  13. Method of computer generation and projection recording of microholograms for holographic memory systems: mathematical modelling and experimental implementation

    International Nuclear Information System (INIS)

    Betin, A Yu; Bobrinev, V I; Evtikhiev, N N; Zherdev, A Yu; Zlokazov, E Yu; Lushnikov, D S; Markin, V V; Odinokov, S B; Starikov, S N; Starikov, R S

    2013-01-01

    A method of computer generation and projection recording of microholograms for holographic memory systems is presented; the results of mathematical modelling and experimental implementation of the method are demonstrated. (holographic memory)

  14. Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    1985-01-01

    Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…

  15. Single-Chip Computers With Microelectromechanical Systems-Based Magnetic Memory

    NARCIS (Netherlands)

    Carley, L. Richard; Bain, James A.; Fedder, Gary K.; Greve, David W.; Guillou, David F.; Lu, Michael S.C.; Mukherjee, Tamal; Santhanam, Suresh; Abelmann, Leon; Min, Seungook

    This article describes an approach for implementing a complete computer system (CPU, RAM, I/O, and nonvolatile mass memory) on a single integrated-circuit substrate (a chip)—hence, the name "single-chip computer." The approach presented combines advances in the field of microelectromechanical

  16. Visuospatial memory computations during whole-body rotations in roll

    NARCIS (Netherlands)

    Pelt, S. van; Gisbergen, J.A.M. van; Medendorp, W.P.

    2005-01-01

    We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic

  17. Attention and visual memory in visualization and computer graphics.

    Science.gov (United States)

    Healey, Christopher G; Enns, James T

    2012-07-01

    A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see" details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

  18. Forms of memory: Investigating the computational basis of semantic-episodic memory interactions

    NARCIS (Netherlands)

    Neville, D.A.

    2015-01-01

    The present thesis investigated how the memory systems related to the processing of semantic and episodic information combine to generate behavioural performance as measured in standard laboratory tasks. Across a series of behavioural experiments I looked at different types of interactions between

  19. A homotopy method for solving Riccati equations on a shared memory parallel computer

    International Nuclear Information System (INIS)

    Zigic, D.; Watson, L.T.; Collins, E.G. Jr.; Davis, L.D.

    1993-01-01

    Although there are numerous algorithms for solving Riccati equations, there still remains a need for algorithms which can operate efficiently on large problems and on parallel machines. This paper gives a new homotopy-based algorithm for solving Riccati equations on a shared memory parallel computer. The central part of the algorithm is the computation of the kernel of the Jacobian matrix, which is essential for the corrector iterations along the homotopy zero curve. Using a Schur decomposition the tensor product structure of various matrices can be efficiently exploited. The algorithm allows for efficient parallelization on shared memory machines.
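
    The homotopy idea — deform a trivial equation into the target equation while applying Newton corrector iterations along the zero curve — can be shown on the scalar Riccati equation 2ax - (b^2/r)x^2 + q = 0. The coefficients, start point, and step counts below are illustrative assumptions; unlike the matrix algorithm in the record, this sketch makes no attempt to select the stabilizing solution (for this data the curve happens to lead to the root x = -1):

    ```python
    def care_homotopy(a, b, q, r, x0=1.0, steps=50, newton_iters=20):
        """Track the zero curve of H(x, t) = t*F(x) + (1-t)*(x - x0)
        from the trivial problem (t = 0) to the scalar Riccati equation
        F(x) = 2*a*x - (b^2/r)*x^2 + q = 0 (t = 1), using a Newton
        corrector at each step of the continuation parameter t."""
        F = lambda x: 2 * a * x - (b * b / r) * x * x + q
        dF = lambda x: 2 * a - 2 * (b * b / r) * x
        x = x0
        for s in range(1, steps + 1):
            t = s / steps
            for _ in range(newton_iters):
                H = t * F(x) + (1 - t) * (x - x0)
                dH = t * dF(x) + (1 - t)
                x -= H / dH          # Newton corrector back onto the curve
        return x

    x = care_homotopy(a=1.0, b=1.0, q=3.0, r=1.0)
    # Follows the zero curve from x0 = 1 to a root of 2x - x^2 + 3 = 0.
    ```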

  20. Memory

    Science.gov (United States)

    ... it has to decide what is worth remembering. Memory is the process of storing and then remembering this information. There are different types of memory. Short-term memory stores information for a few ...

  1. Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model

    National Research Council Canada - National Science Library

    Eggleston, Robert G; McCreight, Katherine L

    2006-01-01

    .... In this report, we describe the foundations of a different type of computational architecture; one that we believe will be less susceptible to cognitive brittleness and can better scale to complex and ill-structured work domains...

  2. Memory intensive functional architecture for distributed computer control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures and one system implementation, which is a system for performing control and data acquisition of a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  3. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  4. Effects of Violent and Non-Violent Computer Game Content on Memory Performance in Adolescents

    Science.gov (United States)

    Maass, Asja; Kollhorster, Kirsten; Riediger, Annemarie; MacDonald, Vanessa; Lohaus, Arnold

    2011-01-01

    The present study focuses on the short-term effects of electronic entertainment media on memory and learning processes. It compares the effects of violent versus non-violent computer game content in a condition of playing and in another condition of watching the same game. The participants consisted of 83 female and 94 male adolescents with a mean…

  5. Computer Simulations of Developmental Change: The Contributions of Working Memory Capacity and Long-Term Knowledge

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2008-01-01

    Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…

  6. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    Science.gov (United States)

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation based teaching methods, that is, realistic simulation and non-realistic simulation on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  7. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    Science.gov (United States)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  8. Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…

  9. From shoebox to performative agent: the computer as personal memory machine

    NARCIS (Netherlands)

    van Dijck, J.

    2005-01-01

    Digital technologies offer new opportunities in the everyday lives of people: with still expanding memory capacities, the computer is rapidly becoming a giant storage and processing facility for recording and retrieving ‘bits of life’. Software engineers and companies promise not only to expand the

  10. Memory allocation and computations for Laplace’s equation of 3-D arbitrary boundary problems

    Directory of Open Access Journals (Sweden)

    Tsay Tswn-Syau

    2017-01-01

    Computation iteration schemes and a memory allocation technique for the finite difference method are presented in this paper. The transformed form of a groundwater flow problem in generalized curvilinear coordinates is taken as the illustrating example, and a 3-dimensional, second-order accurate, 19-point scheme is presented. Traditional element-by-element methods (e.g., SOR) are preferred since they are simple and memory efficient, but they are time consuming in computation. For efficient memory allocation, an index method is presented to store the sparse non-symmetric matrix of the problem. For computations, conjugate-gradient-like methods are reported to be computationally efficient. Among them, using incomplete Choleski decomposition as a preconditioner is reported to be a good method for iteration convergence. In general, the index method developed in this paper has the following advantages: (1) adaptable to various governing and boundary conditions, (2) flexible for higher order approximation, (3) independent of problem dimension, (4) efficient for complex problems when the global matrix is not symmetric, (5) convenient for general sparse matrices, (6) computationally efficient in the most time-consuming procedure of matrix multiplication, and (7) applicable to any developed matrix solver.
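
    The index method described is in the spirit of compressed sparse row (CSR) storage: only non-zeros are kept, with column indices and row pointers, and the matrix-vector product — the dominant cost inside conjugate-gradient-like iterations — walks those arrays. A minimal sketch (illustrative naming, not the authors' implementation):

    ```python
    def to_csr(dense):
        """Store only the non-zeros of a sparse, non-symmetric matrix
        using an index (CSR-style) layout: values, column indices,
        and row pointers."""
        values, cols, row_ptr = [], [], [0]
        for row in dense:
            for j, v in enumerate(row):
                if v != 0:
                    values.append(v)
                    cols.append(j)
            row_ptr.append(len(values))
        return values, cols, row_ptr

    def csr_matvec(values, cols, row_ptr, x):
        """Matrix-vector product over the index layout; touches each
        stored non-zero exactly once."""
        y = []
        for r in range(len(row_ptr) - 1):
            y.append(sum(values[k] * x[cols[k]]
                         for k in range(row_ptr[r], row_ptr[r + 1])))
        return y

    A = [[4, 0, 1],
         [0, 3, 0],
         [1, 0, 2]]
    vals, cols, ptr = to_csr(A)
    y = csr_matvec(vals, cols, ptr, [1, 1, 1])   # [5, 3, 3]
    ```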

  11. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  12. Virtual memory support for distributed computing environments using a shared data object model

    Science.gov (United States)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed, and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together constitute a novel approach to supporting flexible coherence under application control.
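
    The unified view that memory-mapping provides — one interface for both byte-level memory access and the persistent storage object behind it — can be demonstrated with Python's standard-library mmap. This single-process sketch deliberately omits the paper's coherence and synchronization machinery:

    ```python
    import mmap
    import os
    import tempfile

    # Map a file-backed object into the address space so that ordinary
    # memory writes and the persistent storage object share one interface.
    fd, path = tempfile.mkstemp()
    os.write(fd, b"\x00" * 4096)          # back the mapping with 4 KiB

    with mmap.mmap(fd, 4096) as mem:
        mem[0:5] = b"hello"               # ordinary memory write...
        mem.flush()                       # ...propagated to the object

    with open(path, "rb") as f:
        data = f.read(5)                  # the store sees the same bytes

    os.close(fd)
    os.remove(path)
    ```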

  13. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej; Kuźnik, Krzysztof M.; Paszyński, Maciej R.; Calo, Victor M.; Pardo, D.

    2014-06-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(N p^2) for two dimensional problems, and O(N^(4/3) p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one dimensional case, O(N^1.5 p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version significantly reduces the computational cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
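
    The scaling claims above can be sanity-checked numerically. Below is a minimal sketch assuming the paper's asymptotic formulas with unit constants; the helper names `sequential_cost` and `parallel_cost` are hypothetical:

```python
# Hypothetical sketch: compare the abstract's theoretical cost estimates for the
# sequential vs. ideal shared-memory parallel isogeometric direct solver.
import math

def sequential_cost(N, p, dim):
    """Sequential cost: O(N p^2) in 1D, O(N^1.5 p^3) in 2D, O(N^2 p^3) in 3D."""
    return {1: N * p**2, 2: N**1.5 * p**3, 3: N**2 * p**3}[dim]

def parallel_cost(N, p, dim):
    """Ideal parallel cost: O(p^2 log(N/p)) in 1D, O(N p^2) in 2D, O(N^(4/3) p^2) in 3D."""
    return {1: p**2 * math.log(N / p), 2: N * p**2, 3: N**(4/3) * p**2}[dim]

# Speedup (up to constants) for a 2D problem with 10^6 unknowns, cubic B-splines.
# The ratio reduces to N^0.5 * p = 1000 * 3 here.
N, p = 10**6, 3
speedup = sequential_cost(N, p, 2) / parallel_cost(N, p, 2)
print(f"2D theoretical speedup ~ N^0.5 p = {speedup:.0f}")
```

    The same two functions reproduce the other tabulated ratios by changing `dim`.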

  15. Exploiting short-term memory in soft body dynamics as a computational resource.

    Science.gov (United States)

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  16. Novel spintronics devices for memory and logic: prospects and challenges for room temperature all spin computing

    Science.gov (United States)

    Wang, Jian-Ping

    An energy-efficient memory and logic device for the post-CMOS era has been the goal of a variety of research fields. The limits of scaling, which we expect to reach by the year 2025, demand that future advances in computational power be realized not from ever-shrinking device sizes, but from innovative designs, new materials, and new physics. Magnetoresistive devices have been a promising candidate for future integrated magnetic computation because of their unique non-volatility and functionality. The application of perpendicular magnetic anisotropy for potential STT-RAM applications was demonstrated and has since been intensively investigated by both academic and industry groups, but there is no clear pathway for how scaling will eventually work for both memory and logic applications. One of the main reasons is that no material stack candidate has been demonstrated that could lead to a scaling scheme down to sub-10 nm. Another challenge for the use of magnetoresistive devices in logic applications is their available switching speed and writing energy. Although good progress has been made in demonstrating fast switching of a thermally stable magnetic tunnel junction (MTJ) down to 165 ps, this is still several times slower than the CMOS counterpart. In this talk, I will review the recent progress by my research group and my C-SPIN colleagues, then discuss the opportunities, challenges and some potential pathways for magnetoresistive devices for memory and logic applications and their integration into a room temperature all-spin computing system.

  17. PREFACE: Special section on Computational Fluid Dynamics—in memory of Professor Kunio Kuwahara

    Science.gov (United States)

    Ishii, Katsuya

    2011-08-01

    This issue includes a special section on computational fluid dynamics (CFD) in memory of the late Professor Kunio Kuwahara, who passed away on 15 September 2008, at the age of 66. In this special section, five articles are included that are based on the lectures and discussions at `The 7th International Nobeyama Workshop on CFD: To the Memory of Professor Kuwahara' held in Tokyo on 23 and 24 September 2009. Professor Kuwahara started his research in fluid dynamics under Professor Imai at the University of Tokyo. His first paper was published in 1969 with the title 'Steady Viscous Flow within Circular Boundary', with Professor Imai. In this paper, he combined theoretical and numerical methods in fluid dynamics. Since that time, he made significant and seminal contributions to computational fluid dynamics. He undertook pioneering numerical studies on the vortex method in the 1970s. From then to the early nineties, he developed numerical analyses of a variety of three-dimensional unsteady phenomena of incompressible and compressible fluid flows and/or complex fluid flows using his own supercomputers with academic and industrial co-workers and members of his private research institute, ICFD in Tokyo. In addition, a number of senior and young researchers of fluid mechanics around the world were invited to ICFD and the Nobeyama workshops, which were held near his villa, and they intensively discussed new frontier problems of fluid physics and fluid engineering thanks to Professor Kuwahara's kind hospitality. At the memorial Nobeyama workshop held in 2009, 24 overseas speakers presented their papers, including the talks of Dr J P Boris (Naval Research Laboratory), Dr E S Oran (Naval Research Laboratory), Professor Z J Wang (Iowa State University), Dr M Meinke (RWTH Aachen), Professor K Ghia (University of Cincinnati), Professor U Ghia (University of Cincinnati), Professor F Hussain (University of Houston), Professor M Farge (École Normale Superieure), Professor J Y Yong (National

  18. System of common usage on the base of external memory devices and the SM-3 computer

    International Nuclear Information System (INIS)

    Baluka, G.; Vasin, A.Yu.; Ermakov, V.A.; Zhukov, G.P.; Zimin, G.N.; Namsraj, Yu.; Ostrovnoj, A.I.; Savvateev, A.S.; Salamatin, I.M.; Yanovskij, G.Ya.

    1980-01-01

    An easily modified system of common usage, based on external memory devices and an SM-3 minicomputer and replacing several pulse analysers (PA), is described. The system retains the merits of a PA and is more advantageous with regard to effective use of equipment, the ability to change configuration and functions, protection of data against losses caused by user errors and certain failures, cost per registration channel, and space occupied. The system of common usage is intended for the IBR-2 pulsed reactor computing centre. It is implemented using the SANPO system facilities for the SM-3 computer [ru

  19. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond the von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and reduce power dissipation by 60.3%, together with a 700-fold reduction in circuit area.

  1. Metal oxide resistive random access memory based synaptic devices for brain-inspired computing

    Science.gov (United States)

    Gao, Bin; Kang, Jinfeng; Zhou, Zheng; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan

    2016-04-01

    The traditional Boolean computing paradigm based on the von Neumann architecture is facing great challenges for future information technology applications such as big data, the Internet of Things (IoT), and wearable devices, due to limited processing capability: binary data storage and computing, non-parallel data processing, and the bus requirement between memory units and logic units. The brain-inspired neuromorphic computing paradigm is believed to be one of the promising solutions for realizing more complex functions at a lower cost. To perform such brain-inspired computing with low cost and low power consumption, novel devices for use as electronic synapses are needed. Metal oxide resistive random access memory (ReRAM) devices have emerged as the leading candidate for electronic synapses. This paper comprehensively addresses the recent work on the design and optimization of metal oxide ReRAM-based synaptic devices. A performance enhancement methodology and optimized operation scheme to achieve analog resistive switching and low-energy training behavior are provided. A three-dimensional vertical synapse network architecture is proposed for high-density integration and low-cost fabrication. The impacts of the ReRAM synaptic device features on the performance of neuromorphic systems are also discussed on the basis of a constructed neuromorphic visual system with a pattern recognition function. Possible solutions to achieve high recognition accuracy and efficiency in neuromorphic systems are presented.

  2. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan Agarwal

    2016-01-01

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read, or a vector-matrix multiplication, as well as a parallel write, or a rank-1 update, with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and, more generally, unsupervised learning.
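
    The two kernels named in the abstract can be sketched in a few lines; the conductance matrix and the update rule below are idealized assumptions, not a device model:

```python
# Minimal sketch of the two crossbar kernels: a parallel read (vector-matrix
# multiply in one analog step) and a parallel write (rank-1 update).
import numpy as np

rng = np.random.default_rng(1)
N = 8
G = rng.uniform(0.1, 1.0, (N, N))   # crossbar conductance matrix (assumed)

def crossbar_read(G, v):
    """Kirchhoff's law sums the currents: one parallel step computes i = G^T v."""
    return G.T @ v

def crossbar_write(G, a, b, eta=0.01):
    """Rank-1 update: all N^2 cells adjust in parallel from row/column pulses."""
    return G + eta * np.outer(a, b)

v = rng.uniform(-1, 1, N)
i = crossbar_read(G, v)             # one "read" = full vector-matrix product
G2 = crossbar_write(G, v, i)        # one "write" = full outer-product update
print(i.shape, G2.shape)
```

    A digital architecture would fetch each of the N^2 weights to compute the same product, which is the source of the O(N) energy advantage the abstract cites.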

  3. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  4. Logic computation in phase change materials by threshold and memory switching.

    Science.gov (United States)

    Cassinerio, M; Ciocchini, N; Ielmini, D

    2013-11-06

    Memristors, namely hysteretic devices capable of changing their resistance in response to applied electrical stimuli, may provide new opportunities for future memory and computation, thanks to their scalable size, low switching energy and nonvolatile nature. We have developed a functionally complete set of logic functions including NOR, NAND and NOT gates, each utilizing a single phase-change memristor (PCM) where resistance switching is due to the phase transformation of an active chalcogenide material. The logic operations are enabled by the high functionality of nanoscale phase change, featuring voltage comparison, additive crystallization and pulse-induced amorphization. The nonvolatile nature of memristive states provides the basis for developing reconfigurable hybrid logic/memory circuits featuring low-power and high-speed switching. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
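
    A toy truth-table model can illustrate how a single threshold cell yields a functionally complete gate set; the threshold values below are illustrative and do not represent the authors' device physics:

```python
# Toy model: a memristive cell whose state flips (output 0) once the summed
# input pulses reach a threshold, giving NOR/NAND/NOT depending on the
# threshold chosen. All thresholds are hypothetical.
def pcm_gate(inputs, threshold):
    """Cell switches (output 0) once summed inputs reach the threshold."""
    return 0 if sum(inputs) >= threshold else 1

NOR  = lambda a, b: pcm_gate([a, b], threshold=1)   # any input switches the cell
NAND = lambda a, b: pcm_gate([a, b], threshold=2)   # both inputs needed
NOT  = lambda a: pcm_gate([a], threshold=1)

# Truth tables for the functionally complete set {NOR, NAND, NOT}
assert [NOR(a, b) for a in (0, 1) for b in (0, 1)] == [1, 0, 0, 0]
assert [NAND(a, b) for a in (0, 1) for b in (0, 1)] == [1, 1, 1, 0]
assert [NOT(a) for a in (0, 1)] == [1, 0]
```

    NAND (or NOR) alone is functionally complete, so any Boolean circuit can in principle be composed from such cells.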

  5. Studies of electron collisions with polyatomic molecules using distributed-memory parallel computers

    International Nuclear Information System (INIS)

    Winstead, C.; Hipes, P.G.; Lima, M.A.P.; McKoy, V.

    1991-01-01

    Elastic electron scattering cross sections from 5-30 eV are reported for the molecules C2H4, C2H6, C3H8, Si2H6, and GeH4, obtained using an implementation of the Schwinger multichannel method for distributed-memory parallel computer architectures. These results, obtained within the static-exchange approximation, are in generally good agreement with the available experimental data. These calculations demonstrate the potential of highly parallel computation in the study of collisions between low-energy electrons and polyatomic gases. The computational methodology discussed is also directly applicable to the calculation of elastic cross sections at higher levels of approximation (target polarization) and of electronic excitation cross sections.

  6. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej

    2015-02-01

    This paper derives theoretical estimates of the computational cost for an isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for C^(p-1) global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order O(log(N) p^2) for the one dimensional (1D) case, O(N p^2) for the two dimensional (2D) case, and O(N^(4/3) p^2) for the three dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX and SuperLU, available through the PETIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates both in terms of p and N. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, becoming about 20% for 256 processors for a 3D example with 128^3 unknowns and linear B-splines with C^0 global continuity, and 15% for a 3D example with 64^3 unknowns and quartic B-splines with C^3 global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher order continuity spaces is large, quickly consuming all the available memory resources even in the parallel distributed memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving higher order continuity spaces, although the number of processors that one can efficiently employ is somewhat limited.

  7. Iterative schemes for parallel Sn algorithms in a shared-memory computing environment

    International Nuclear Information System (INIS)

    Haghighat, A.; Hunter, M.A.; Mattis, R.E.

    1995-01-01

    Several two-dimensional spatial domain partitioning Sn transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may result in large parallel efficiencies (>60%) on the C90. It is also demonstrated that for a realistic shielding problem, the use of the negative flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency.

  8. Disentangling the Relationship Between the Adoption of In-Memory Computing and Firm Performance

    DEFF Research Database (Denmark)

    Fay, Marua; Müller, Oliver; vom Brocke, Jan

    2016-01-01

    Recent growth in data volume, variety, and velocity led to an increased demand for high-performance data processing and analytics solutions. In-memory computing (IMC) enables organizations to boost their information processing capacity, and is widely acknowledged to be one of the leading strategic...... at explaining the relationship between the adoption of IMC solutions and firm performance. In this research-in-progress paper we discuss the theoretical background of our work, describe the proposed research design, and develop five hypotheses for later testing. Our work aims at contributing to the research...

  9. Memory

    OpenAIRE

    Wager, Nadia

    2017-01-01

    This chapter will explore a response to traumatic victimisation which has divided the opinions of psychologists at an exponential rate. We will be examining amnesia for memories of childhood sexual abuse and the potential to recover these memories in adulthood. Whilst this phenomenon is generally accepted in clinical circles, it is seen as highly contentious amongst research psychologists, particularly experimental cognitive psychologists. The chapter will begin with a real case study of a wo...

  10. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

    Science.gov (United States)

    Choi, Shinhyun; Tan, Scott H.; Li, Zefan; Kim, Yunjo; Choi, Chanyeol; Chen, Pai-Yu; Yeon, Hanwool; Yu, Shimeng; Kim, Jeehwan

    2018-01-01

    Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

  11. Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment

    Science.gov (United States)

    Hancock, Thomas M., III

    1999-01-01

    This paper details the scientific objectives, experiment design, data collection method, and post flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip". In its mildest form, a soft-event upset can cause software exceptions or unexpected events, initiate spacecraft safing (ending data collection), or corrupt fault protection and error recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment detectors. Post flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.

  12. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of limited hardware resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper working. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that connects the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The obtained analytical results are related to a practical experiment showing interesting and valuable results.
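
    The stretched-exponential (Kohlrausch) form underlying such models, f(t) = exp(-(t/tau)^beta), can be fitted by log-log linearization; the parameter values below are invented for illustration:

```python
# Hedged sketch: recover stretched-exponential parameters from synthetic data.
# ln(-ln f) = beta*ln t - beta*ln tau, so a straight-line fit yields (beta, tau).
import numpy as np

def stretched_exp(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

# Synthetic "relaxation" data with known (assumed) parameters
t = np.linspace(1, 100, 200)
f = stretched_exp(t, tau=20.0, beta=0.6)

# Linearize and fit: slope = beta, intercept = -beta*ln(tau)
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(f)), 1)
beta_hat = slope
tau_hat = np.exp(-intercept / slope)
print(round(beta_hat, 3), round(tau_hat, 1))
```

    For beta = 1 the form reduces to an ordinary exponential; beta < 1 produces the heavy-tailed relaxation associated with long-range dependence.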

  13. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    Science.gov (United States)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  14. Photon echo quantum random access memory integration in a quantum computer

    International Nuclear Information System (INIS)

    Moiseev, Sergey A; Andrianov, Sergey N

    2012-01-01

    We have analysed an efficient integration of multi-qubit echo quantum memory (QM) into a quantum computer scheme based on SQUIDs, quantum dots or atomic resonant ensembles in a quantum electrodynamics cavity. Here, one atomic ensemble with controllable inhomogeneous broadening is used for the QM node and other nodes characterized by a homogeneously broadened resonant line are used for processing. We have found the optimal conditions for the efficient integration of the multi-qubit QM modified for the analysed scheme, and we have determined the self-temporal modes providing a perfect reversible transfer of the photon qubits between the QM node and arbitrary processing nodes. The obtained results open the way for the realization of full-scale solid state quantum computing based on the efficient multi-qubit QM. (paper)

  15. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    Science.gov (United States)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input work load of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying preferential execution of TMR (triple modular redundant) and SIMPLEX (non-redundant) jobs.

  16. Linking Working Memory and Long-Term Memory: A Computational Model of the Learning of New Words

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2007-01-01

    The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory…

  17. Read-only-memory-based quantum computation: Experimental explorations using nuclear magnetic resonance and future prospects

    International Nuclear Information System (INIS)

    Sypher, D.R.; Brereton, I.M.; Wiseman, H.M.; Hollis, B.L.; Travaglione, B.C.

    2002-01-01

    Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less 'magical', and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature 'real-world' problem relating to the lengths of paths in a network
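
    The one-qubit Deutsch problem mentioned in item (a) can be sketched with a generic phase-oracle construction; this is an assumption for illustration, not the ROM-based encoding the paper studies:

```python
# Illustrative numpy sketch of the one-qubit Deutsch problem: apply a phase
# oracle for f between two Hadamards; measuring |0> means f is constant,
# |1> means f is balanced.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def deutsch(f):
    # Phase oracle: U|x> = (-1)^f(x) |x>
    oracle = np.diag([(-1) ** f(0), (-1) ** f(1)])
    psi = H @ oracle @ H @ np.array([1.0, 0.0])  # start in |0>
    return "constant" if abs(psi[0]) ** 2 > 0.5 else "balanced"

assert deutsch(lambda x: 0) == "constant"
assert deutsch(lambda x: 1) == "constant"
assert deutsch(lambda x: x) == "balanced"
assert deutsch(lambda x: 1 - x) == "balanced"
```

    A single oracle query suffices to distinguish the two cases, which classically requires two evaluations of f.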

  18. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    Science.gov (United States)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

    The unprecedented growth of climate data creates new opportunities for climate studies, and yet big climate data pose a grand challenge to climatologists to efficiently manage and analyze big data. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index is built over the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebook). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multiple-dimensional, array-based datasets in various geoscience domains.
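
    The chunk-plus-index idea can be sketched independently of Spark; the data layout and names below are invented for illustration:

```python
# Hedged sketch of chunk skipping with a spatiotemporal index: each chunk
# records its time range so a query touches only overlapping chunks.
import numpy as np

rng = np.random.default_rng(2)
# A (time, lat, lon) array split into four chunks along the time axis
data = rng.normal(size=(40, 8, 8))
chunks = [(t0, data[t0:t0 + 10]) for t0 in range(0, 40, 10)]

def query(chunks, t_min, t_max):
    """Read only the chunks whose time range overlaps [t_min, t_max)."""
    touched, parts = 0, []
    for t0, chunk in chunks:
        if t0 < t_max and t0 + len(chunk) > t_min:   # bounding-box test
            touched += 1
            lo, hi = max(t_min - t0, 0), min(t_max - t0, len(chunk))
            parts.append(chunk[lo:hi])
    return np.concatenate(parts), touched

subset, touched = query(chunks, 12, 18)
print(subset.shape, touched)   # only one of the four chunks is read
```

    In ClimateSpark the same bounding-box test is applied per spatiotemporal chunk before any bytes are read from HDFS, which is what yields the data locality the abstract reports.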

  19. Memories.

    Science.gov (United States)

    Brand, Judith, Ed.

    1998-01-01

    This theme issue of the journal "Exploring" covers the topic of "memories" and describes an exhibition at San Francisco's Exploratorium that ran from May 22, 1998 through January 1999 and that contained over 40 hands-on exhibits, demonstrations, artworks, images, sounds, smells, and tastes that demonstrated and depicted the biological,…

  20. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    Science.gov (United States)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data are being generated from global surveillance systems and model simulations. These data are widely used to analyze environmental problems such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging because data processing and analysis are both data- and computing-intensive. To tackle these challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g. netCDF4, HDF4) in their native formats, stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve data according to a specified geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy is applied to split the queried data evenly across the computing nodes and store them in memory as climateRDDs for processing. By leveraging Spark SQL and User Defined Functions (UDFs), climate data analysis operations can be expressed in intuitive SQL. ClimateSpark is evaluated with two use cases on the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset: one conducts a spatiotemporal query and visualizes the subset results as an animation; the other compares different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark significantly accelerates data query and processing, and enables complex analysis services to be served in SQL-style fashion.

  1. Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF): A data assimilation scheme for memory intensive, high performance computing

    Science.gov (United States)

    Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick

    2015-12-01

    Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant on the Ensemble Kalman Filter (EnKF). RumEnKF differs from EnKF in that it does not store the entire ensemble, but rather only saves the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated is less dependent on available memory, and depends mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current-generation supercomputer architectures, where the number of available floating point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces the used memory compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low dimensional Lorenz and high dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error from the three algorithms.
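
    The core memory-saving idea, keeping only the first two moments instead of the full ensemble, can be sketched with a streaming moment accumulator (Welford's algorithm). This is an illustrative reconstruction, not the authors' code; `streaming_ensemble_moments` is a hypothetical name, and a full RumEnKF would additionally fold observations into the update.

```python
import numpy as np

def streaming_ensemble_moments(member_iter, n_state):
    """Accumulate the first two moments of an ensemble without storing it.

    A standard EnKF keeps all n_ens x n_state members in memory; here only
    a running mean and a sum-of-squared-deviations accumulator of size
    n_state are kept, so memory use is independent of the ensemble size.
    """
    mean = np.zeros(n_state)
    m2 = np.zeros(n_state)     # sum of squared deviations from the running mean
    n = 0
    for member in member_iter:
        n += 1
        delta = member - mean
        mean += delta / n
        m2 += delta * (member - mean)
    variance = m2 / (n - 1) if n > 1 else np.zeros(n_state)
    return mean, variance

# Members are generated one at a time; memory stays O(n_state) throughout.
rng = np.random.default_rng(0)
members = (rng.normal(loc=1.0, scale=2.0, size=50) for _ in range(5000))
mean, var = streaming_ensemble_moments(members, 50)
```

    Because each member is consumed as soon as it is produced, the ensemble size is limited by CPU time rather than by memory, which is exactly the trade-off the abstract describes.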

  2. Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.

    Science.gov (United States)

    Dworak, Markus; Schierl, Thomas; Bruns, Thomas; Strüder, Heiko Klaus

    2007-11-01

    Television and computer game consumption are a powerful influence in the lives of most children. Previous evidence has supported the notion that media exposure could impair a variety of behavioral characteristics. Excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the relationship between singular excessive media consumption and sleep patterns, and its implications for children. The aim of this study was to investigate the effects of singular excessive television and computer game consumption on sleep patterns and memory performance of children. Eleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance. Only computer game playing resulted in significantly reduced amounts of slow-wave sleep as well as significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after previous computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns. The results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of the negative influence of media consumption on children's sleep, learning, and memory.

  3. A computer vision-based automated Figure-8 maze for working memory test in rodents.

    Science.gov (United States)

    Pedigo, Samuel F; Song, Eun Young; Jung, Min Whan; Kim, Jeansok J

    2006-09-30

    The benchmark test for prefrontal cortex (PFC)-mediated working memory in rodents is a delayed alternation task utilizing variations of T-maze or Figure-8 maze, which requires the animals to make specific arm entry responses for reward. In this task, however, manual procedures involved in shaping target behavior, imposing delays between trials and delivering rewards can potentially influence the animal's performance on the maze. Here, we report an automated Figure-8 maze which does not necessitate experimenter-subject interaction during shaping, training or testing. This system incorporates a computer vision system for tracking, motorized gates to impose delays, and automated reward delivery. The maze is controlled by custom software that records the animal's location and activates the gates according to the animal's behavior and a control algorithm. The program performs calculations of task accuracy, tracks movement sequence through the maze, and provides other dependent variables (such as running speed, time spent in different maze locations, activity level during delay). Testing in rats indicates that the performance accuracy is inversely proportional to the delay interval, decreases with PFC lesions, and that animals anticipate timing during long delays. Thus, our automated Figure-8 maze is effective at assessing working memory and provides novel behavioral measures in rodents.

  4. Holographic memory system based on projection recording of computer-generated 1D Fourier holograms.

    Science.gov (United States)

    Betin, A Yu; Bobrinev, V I; Donchenko, S S; Odinokov, S B; Evtikhiev, N N; Starikov, R S; Starikov, S N; Zlokazov, E Yu

    2014-10-01

    Computer generation of holographic structures significantly simplifies the optical scheme that is used to record the microholograms in a holographic memory recording system. Digital holographic synthesis also makes it possible to account for the nonlinear errors of the recording system and thereby improve microhologram quality. Multiplexed recording of holograms is a widespread technique for increasing the data recording density. In this article we present a holographic memory system based on digital synthesis of amplitude one-dimensional (1D) Fourier transform holograms and the multiplexed recording of these holograms onto the holographic carrier using an optical projection scheme. 1D Fourier transform holograms are very sensitive to the orientation of the anamorphic optical element (a cylindrical lens) that is required for encoded data object reconstruction. Multiplexed recording of several holograms with different orientations in an optical projection scheme allowed reconstruction of the data object from each hologram by rotating the cylindrical lens to the corresponding angle. We also discuss two optical schemes for reading out the recorded holograms: a full-page readout system and a line-by-line readout system. We consider the benefits of both systems and present the results of experimental modeling of nonmultiplexed and multiplexed recording and reconstruction of 1D Fourier holograms.

  5. Logic and memory concepts for all-magnetic computing based on transverse domain walls

    International Nuclear Information System (INIS)

    Vandermeulen, J; Van de Wiele, B; Dupré, L; Van Waeyenberge, B

    2015-01-01

    We introduce a non-volatile digital logic and memory concept in which the binary data is stored in the transverse magnetic domain walls present in in-plane magnetized nanowires with sufficiently small cross-sectional dimensions. We assign the digital bit to the two possible orientations of the transverse domain wall. Numerical proofs-of-concept are presented for a NOT-, AND- and OR-gate, a FAN-out as well as a reading and writing device. Contrary to the chirality based vortex domain wall logic gates introduced in Omari and Hayward (2014 Phys. Rev. Appl. 2 044001), the presented concepts remain applicable when miniaturized and are driven by electrical currents, making the technology compatible with the in-plane racetrack memory concept. The individual devices can be easily combined to logic networks working with clock speeds that scale linearly with decreasing design dimensions. This opens opportunities for an all-magnetic computing technology where the digital data is stored and processed under the same magnetic representation. (paper)

  6. Computational Thermodynamics and Kinetics-Based ICME Framework for High-Temperature Shape Memory Alloys

    Science.gov (United States)

    Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim

    2015-11-01

    Over the last decade, interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industries demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs and limitations to the full implementation of ICME frameworks for HTSMA development are presented.

  7. The Case for Higher Computational Density in the Memory-Bound FDTD Method within Multicore Environments

    Directory of Open Access Journals (Sweden)

    Mohammed F. Hadi

    2012-01-01

    It is argued here that more accurate, though more compute-intensive, alternate algorithms to certain computational methods which are deemed too inefficient and wasteful when implemented within serial codes can be more efficient and cost-effective when implemented in parallel codes designed to run on today's multicore and many-core environments. This argument is most germane to methods that involve large data sets with relatively limited computational density, in other words, algorithms with small ratios of floating point operations to memory accesses. The examples chosen here to support this argument represent a variety of high-order finite-difference time-domain algorithms. It will be demonstrated that a three- to eightfold increase in floating-point operations due to higher-order finite-differences will translate to only two- to threefold increases in actual run times using today's graphics or central processing units. It is hoped that this argument will convince researchers to revisit certain numerical techniques that have long been shelved and reevaluate them for multicore usability.
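
    The flops-to-memory-traffic argument can be made concrete with a back-of-the-envelope arithmetic-intensity model. The function below is an illustrative toy, not a measurement from the paper: it assumes single-precision values and perfect cache reuse of neighbour points, so each grid point costs one read and one write per update.

```python
def fdtd_update_cost(order, bytes_per_value=4):
    """Toy cost model for a 1D central-difference update of even `order`.

    An order-2k stencil performs roughly 2k multiply-add pairs (counted
    as 4k flops) per output point, while streamed memory traffic stays at
    one read plus one write per point when neighbours come from cache.
    Returns (flops, bytes_moved, arithmetic_intensity) per grid point.
    """
    k = order // 2
    flops = 4 * k
    bytes_moved = 2 * bytes_per_value
    return flops, bytes_moved, flops / bytes_moved

f2, b2, i2 = fdtd_update_cost(2)   # conventional 2nd-order FDTD
f8, b8, i8 = fdtd_update_cost(8)   # high-order variant
```

    In this model, going from 2nd to 8th order quadruples the flops while the streamed bytes stay constant, so arithmetic intensity rises fourfold: precisely the regime where a memory-bound kernel on flop-rich multicore or GPU hardware absorbs the extra work with little added run time.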

  8. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  9. Simulation of radiation effects on three-dimensional computer optical memories

    International Nuclear Information System (INIS)

    Moscovitch, M.; Emfietzoglou, D.

    1997-01-01

    A model was developed to simulate the effects of heavy charged-particle (HCP) radiation on the information stored in three-dimensional computer optical memories. The model is based on (i) the HCP track radial dose distribution, (ii) the spatial and temporal distribution of temperature in the track, (iii) the matrix-specific radiation-induced changes that will affect the response, and (iv) the kinetics of transition of photochromic molecules from the colored to the colorless isomeric form (bit flip). It is shown that information stored in a volume of several nanometers radius around the particle's track axis may be lost. The magnitude of the effect is dependent on the particle's track structure.

  10. Simulation of radiation effects on three-dimensional computer optical memories

    Science.gov (United States)

    Moscovitch, M.; Emfietzoglou, D.

    1997-01-01

    A model was developed to simulate the effects of heavy charged-particle (HCP) radiation on the information stored in three-dimensional computer optical memories. The model is based on (i) the HCP track radial dose distribution, (ii) the spatial and temporal distribution of temperature in the track, (iii) the matrix-specific radiation-induced changes that will affect the response, and (iv) the kinetics of transition of photochromic molecules from the colored to the colorless isomeric form (bit flip). It is shown that information stored in a volume of several nanometers radius around the particle's track axis may be lost. The magnitude of the effect is dependent on the particle's track structure.

  11. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-01-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. (orig.)

  12. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  13. Memory architecture for efficient utilization of SDRAM: a case study of the computation/memory access trade-off

    DEFF Research Database (Denmark)

    Gleerup, Thomas Møller; Holten-Lund, Hans Erik; Madsen, Jan

    2000-01-01

    This paper discusses the trade-off between calculations and memory accesses in a 3D graphics tile renderer for visualization of data from medical scanners. The performance requirement of this application is a frame rate of 25 frames per second when rendering 3D models with 2 million triangles. In software, forward differencing is usually better, but in this hardware implementation, the trade-off has made it possible to develop a very regular memory architecture with a buffering system, which can reach 95% bandwidth utilization using off-the-shelf SDRAM. This is achieved by changing the algorithm to use a memory access strategy with write-only and read-only phases, and a buffering system which uses round-robin bank write-access combined with burst read-access.
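
    The access strategy described (separate write-only and read-only phases, round-robin bank writes, burst reads) can be sketched schematically. This is a toy illustration of the scheduling idea with hypothetical names, not the renderer's hardware design; real utilization figures depend on SDRAM timing parameters.

```python
def schedule_sdram(writes, reads, n_banks=4, burst_len=8):
    """Order memory transactions into a write-only phase followed by a
    read-only phase. Writes rotate round-robin over banks, so one bank's
    row activation can overlap another bank's data transfer; reads are
    grouped into fixed-length bursts to keep the data bus busy."""
    ops = [("write", i % n_banks, addr) for i, addr in enumerate(writes)]
    for j in range(0, len(reads), burst_len):
        ops.append(("read-burst", reads[j:j + burst_len]))
    return ops

# Five writes cycle through banks 0,1,2,3,0; sixteen reads form two bursts.
ops = schedule_sdram(writes=[10, 20, 30, 40, 50], reads=list(range(16)))
```

    Separating the phases avoids costly bus turnarounds between individual reads and writes, which is where much of the bandwidth of a naively scheduled SDRAM is lost.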

  14. Distinctive Features Hold a Privileged Status in the Computation of Word Meaning: Implications for Theories of Semantic Memory

    Science.gov (United States)

    Cree, George S.; McNorgan, Chris; McRae, Ken

    2006-01-01

    The authors present data from 2 feature verification experiments designed to determine whether distinctive features have a privileged status in the computation of word meaning. They use an attractor-based connectionist model of semantic memory to derive predictions for the experiments. Contrary to central predictions of the conceptual structure…

  15. Memory and selective attention in multiple sclerosis: cross-sectional computer-based assessment in a large outpatient sample.

    Science.gov (United States)

    Adler, Georg; Lembach, Yvonne

    2015-08-01

    Cognitive impairments may have a severe impact on everyday functioning and quality of life of patients with multiple sclerosis (MS). However, there are some methodological problems in the assessment and only a few studies allow a representative estimate of the prevalence and severity of cognitive impairments in MS patients. We applied a computer-based method, the memory and attention test (MAT), in 531 outpatients with MS, who were assessed at nine neurological practices or specialized outpatient clinics. The findings were compared with those obtained in an age-, sex- and education-matched control group of 84 healthy subjects. Episodic short-term memory was substantially decreased in the MS patients. About 20% of them scored more than two standard deviations below the mean of the control group. The episodic short-term memory score was negatively correlated with the EDSS score. Minor but also significant impairments in the MS patients were found for verbal short-term memory, episodic working memory and selective attention. The computer-based MAT was found to be useful for a routine assessment of cognition in MS outpatients.

  16. Playing the computer game Tetris prior to viewing traumatic film material and subsequent intrusive memories: Examining proactive interference.

    Science.gov (United States)

    James, Ella L; Lau-Zhu, Alex; Tickle, Hannah; Horsch, Antje; Holmes, Emily A

    2016-12-01

    Visuospatial working memory (WM) tasks performed concurrently or after an experimental trauma (traumatic film viewing) have been shown to reduce subsequent intrusive memories (concurrent or retroactive interference, respectively). This effect is thought to arise because, during the time window of memory consolidation, the film memory is labile and vulnerable to interference by the WM task. However, it is not known whether tasks before an experimental trauma (i.e. proactive interference) would also be effective. Therefore, we tested if a visuospatial WM task given before a traumatic film reduced intrusions. Findings are relevant to the development of preventative strategies to reduce intrusive memories of trauma for groups who are routinely exposed to trauma (e.g. emergency services personnel) and for whom tasks prior to trauma exposure might be beneficial. Participants were randomly assigned to 1 of 2 conditions. In the Tetris condition (n = 28), participants engaged in the computer game for 11 min immediately before viewing a 12-min traumatic film, whereas those in the Control condition (n = 28) had no task during this period. Intrusive memory frequency was assessed using an intrusion diary over 1-week and an Intrusion Provocation Task at 1-week follow-up. Recognition memory for the film was also assessed at 1-week. Compared to the Control condition, participants in the Tetris condition did not report a statistically significant difference in intrusive memories of the trauma film on either measure. There was also no statistically significant difference in recognition memory scores between conditions. The study used an experimental trauma paradigm and findings may not be generalizable to a clinical population. Compared to control, playing Tetris before viewing a trauma film did not lead to a statistically significant reduction in the frequency of later intrusive memories of the film. It is unlikely that proactive interference, at least with this task

  17. A single-trace dual-process model of episodic memory: a novel computational account of familiarity and recollection.

    Science.gov (United States)

    Greve, Andrea; Donaldson, David I; van Rossum, Mark C W

    2010-02-01

    Dual-process theories of episodic memory state that retrieval is contingent on two independent processes: familiarity (providing a sense of oldness) and recollection (recovering events and their context). A variety of studies have reported distinct neural signatures for familiarity and recollection, supporting dual-process theory. One outstanding question is whether these signatures reflect the activation of distinct memory traces or the operation of different retrieval mechanisms on a single memory trace. We present a computational model that uses a single neuronal network to store memory traces, but two distinct and independent retrieval processes access the memory. The model is capable of performing familiarity- and recollection-based discrimination between old and new patterns, demonstrating that dual-process models need not rely on multiple independent memory traces, but can use a single trace. Importantly, our putative familiarity and recollection processes exhibit distinct characteristics analogous to those found in empirical data; they diverge in capacity and sensitivity to sparse and correlated patterns, exhibit distinct ROC curves, and account for performance on both item and associative recognition tests. The demonstration that a single-trace, dual-process model can account for a range of empirical findings highlights the importance of distinguishing between neuronal processes and the neuronal representations on which they operate.
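
    The central claim (two retrieval processes operating on one stored trace) can be illustrated with a generic Hopfield-style sketch: a single Hebbian weight matrix supports both a fast, one-shot familiarity signal and a slower attractor-based recollection. This is an illustrative analogue, not the authors' network; all names and parameters below are chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 10
patterns = rng.choice([-1, 1], size=(p, n))

# A single Hebbian trace stores every studied item; there are no
# separate traces for the two processes.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def familiarity(probe):
    """Fast process: a scalar measure of how strongly the probe
    resonates with the stored trace (high for old items)."""
    return probe @ W @ probe / n

def recollect(probe, steps=10):
    """Slow process: iterative attractor retrieval from the same trace,
    recovering the full stored pattern from a degraded cue."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

old = patterns[0]                      # a studied item
new = rng.choice([-1, 1], size=n)      # an unstudied lure
fam_old, fam_new = familiarity(old), familiarity(new)

noisy = old.copy()
noisy[:20] *= -1                       # degrade 10% of the cue
rec = recollect(noisy)                 # recollection restores the item
```

    Even in this toy version the two processes diverge as in the paper: familiarity is computed in a single pass and only ranks old against new, while recollection runs the same trace as a dynamical system and reconstructs the item's content.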

  18. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    Science.gov (United States)

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data account for the so-called "effortful hypothesis", where distorted input has a detrimental impact on prior information stored in short-term memory.

  19. A memory efficient user interface for CLIPS micro-computer applications

    Science.gov (United States)

    Sterle, Mark E.; Mayer, Richard J.; Jordan, Janice A.; Brodale, Howard N.; Lin, Min-Jin

    1990-01-01

    The goal of the Integrated Southern Pine Beetle Expert System (ISPBEX) is to provide expert-level knowledge concerning treatment advice that is convenient and easy to use for Forest Service personnel. ISPBEX was developed in CLIPS and delivered on an IBM PC AT class micro-computer, operating with an MS/DOS operating system. This restricted the size of the run-time system to 640K. In order to provide a robust expert system, with on-line explanation, help, and alternative actions menus, as well as features that allow the user to back up or execute 'what if' scenarios, a memory efficient menuing system was developed to interface with the CLIPS programs. By robust, we mean an expert system that (1) is user-friendly, (2) provides reasonable solutions for a wide variety of domain specific problems, (3) explains why some solutions were suggested but others were not, and (4) provides technical information relating to the problem solution. Several advantages were gained by using this type of user interface (UI). First, by storing the menus on the hard disk (instead of main memory) during program execution, a more robust system could be implemented. Second, since the menus were built rapidly, development time was reduced. Third, the user may try a new scenario by backing up to any of the input screens and revising segments of the original input without having to retype all the information. And fourth, asserting facts from the menus provided for a dynamic and flexible fact base. This UI technology has been applied successfully in expert systems applications in forest management, agriculture, and manufacturing. This paper discusses the architecture of the UI system, human factors considerations, and the menu syntax design.

  20. Serotonergic modulation of spatial working memory: predictions from a computational network model

    Directory of Open Access Journals (Sweden)

    Maria eCano-Colino

    2013-09-01

    Serotonin (5-HT) receptors of types 1A and 2A are massively expressed in prefrontal cortex (PFC) neurons, an area associated with cognitive function. Hence, 5-HT could be effective in modulating prefrontal-dependent cognitive functions, such as spatial working memory (SWM). However, a direct association between 5-HT and SWM has proved elusive in psychopharmacological studies. Recently, a computational network model of the PFC microcircuit was used to explore the relationship between 5-HT and SWM (Cano-Colino et al. 2013). This study found that both excessive and insufficient 5-HT levels lead to impaired SWM performance in the network, and it concluded that analyzing behavioral responses based on confidence reports could facilitate the experimental identification of SWM behavioral effects of 5-HT neuromodulation. Such analyses may have confounds based on our limited understanding of metacognitive processes. Here, we extend these results by deriving three additional predictions from the model that do not rely on confidence reports. Firstly, only excessive levels of 5-HT should result in SWM deficits that increase with delay duration. Secondly, excessive 5-HT baseline concentration makes the network vulnerable to distractors at distances that were robust to distraction in control conditions, while the network still ignores distractors efficiently for low 5-HT levels that impair SWM. Finally, 5-HT modulates neuronal memory fields in neurophysiological experiments: neurons should be better tuned to the cued stimulus than to the behavioral report for excessive 5-HT levels, while the reverse should happen for low 5-HT concentrations. In all our simulations, agonists of 5-HT1A receptors and antagonists of 5-HT2A receptors produced behavioral and physiological effects in line with global 5-HT level increases. Our model makes specific predictions to be tested experimentally and advance our understanding of the neural basis of SWM and its neuromodulation.

  1. Conditional bistability, a generic cellular mnemonic mechanism for robust and flexible working memory computations.

    Science.gov (United States)

    Rodriguez, Guillaume; Sarazin, Matthieu; Clemente, Alexandra; Holden, Stephanie; Paz, Jeanne T; Delord, Bruno

    2018-04-30

    Persistent neural activity, the substrate of working memory, is thought to emerge from synaptic reverberation within recurrent networks. However, reverberation models do not robustly explain fundamental dynamics of persistent activity, including high spiking irregularity, large intertrial variability, and state transitions. While cellular bistability may contribute to persistent activity, its rigidity appears incompatible with the labile characteristics of persistent activity. Here, we unravel in a cellular model a form of spike-mediated conditional bistability that is robust, generic and provides a rich repertoire of mnemonic computations. Under asynchronous synaptic inputs of the awakened state, conditional bistability generates spiking/bursting episodes, accounting for the irregularity, variability and state transitions characterizing persistent activity. This mechanism has likely been overlooked because of the sub-threshold input it requires, and we predict how to assess it experimentally. Our results suggest a reexamination of the role of intrinsic properties in the collective network dynamics responsible for flexible working memory. SIGNIFICANCE STATEMENT This study unravels a novel form of intrinsic neuronal property, i.e. conditional bistability. We show that, thanks to its conditional character, conditional bistability favors the emergence of flexible and robust forms of persistent activity in PFC neural networks, in opposition to previously studied classical forms of absolute bistability. Specifically, we demonstrate for the first time that conditional bistability 1) is a generic biophysical spike-dependent mechanism of layer V pyramidal neurons in the PFC and that 2) it accounts for essential neurodynamical features for the organisation and flexibility of PFC persistent activity (the large irregularity and intertrial variability of the discharge and its organization under discrete stable states), which remain unexplained in a robust fashion by current models.

  2. A computational model of fMRI activity in the intraparietal sulcus that supports visual working memory.

    Science.gov (United States)

    Domijan, Dražen

    2011-12-01

    A computational model was developed to explain a pattern of results of fMRI activation in the intraparietal sulcus (IPS) supporting visual working memory for multiobject scenes. The model is based on the hypothesis that dendrites of excitatory neurons are major computational elements in the cortical circuit. Dendrites enable formation of a competitive queue that exhibits a gradient of activity values for nodes encoding different objects, and this pattern is stored in working memory. In the model, brain imaging data are interpreted as a consequence of blood flow arising from dendritic processing. Computer simulations showed that the model successfully simulates data showing the involvement of inferior IPS in object individuation and spatial grouping through representation of objects' locations in space, along with the involvement of superior IPS in object identification through representation of a set of objects' features. The model exhibits a capacity limit due to the limited dynamic range for nodes and the operation of lateral inhibition among them. The capacity limit is fixed in the inferior IPS regardless of the objects' complexity, due to the normalization of lateral inhibition, and variable in the superior IPS, due to the different encoding demands for simple and complex shapes. Systematic variation in the strength of self-excitation enables an understanding of the individual differences in working memory capacity. The model offers several testable predictions regarding the neural basis of visual working memory.
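
    The capacity-limiting interaction described above, lateral inhibition normalized over the total input, can be sketched in a few lines. This is a deliberately minimal illustration with invented parameters (`sigma`, `theta`, the activity gradient), not the model's actual equations:

```python
def stored_count(inputs, sigma=0.5, theta=0.12):
    """Divisive normalization: each node's activity is its input divided by
    a saturation constant plus the total input (the lateral-inhibition pool).
    Nodes whose normalized activity exceeds theta count as 'stored'."""
    total = sum(inputs)
    return sum(1 for i in inputs if i / (sigma + total) > theta)

def gradient(n):
    # Competitive-queue activity gradient over n object nodes: 1.0, 0.9, 0.8, ...
    return [1.0 - 0.1 * k for k in range(n)]

if __name__ == "__main__":
    for n in (4, 6, 8, 10):
        print(n, "objects ->", stored_count(gradient(n)), "stored")
```

    Because inhibition is divided by the total input, adding more objects shrinks every node's share, so the number of supra-threshold ("stored") nodes stays near four however many objects compete, mirroring the fixed capacity limit the abstract attributes to normalized lateral inhibition.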

  3. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map a history to each processor dynamically and to map the control process to a fixed processor was applied. The efficiency of the parallelized code reaches up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
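
    The payoff of mapping histories to processors dynamically rather than in fixed blocks can be illustrated with a toy scheduler. The per-history costs and processor count below are invented for illustration; MCNP's actual master/worker protocol is more involved:

```python
import heapq

def makespan_static(costs, p):
    """Block-assign histories to p processors in fixed contiguous chunks."""
    chunk = (len(costs) + p - 1) // p
    loads = [sum(costs[i:i + chunk]) for i in range(0, len(costs), chunk)]
    return max(loads)

def makespan_dynamic(costs, p):
    """Greedy dynamic mapping: each new history goes to the least-loaded
    processor, mimicking a self-scheduling control process."""
    heap = [0.0] * p
    heapq.heapify(heap)
    for c in costs:
        heapq.heappush(heap, heapq.heappop(heap) + c)
    return max(heap)

if __name__ == "__main__":
    # Skewed per-history costs: a few deep-penetration histories dominate.
    costs = [1.0] * 900 + [50.0] * 12
    print(makespan_static(costs, 8), makespan_dynamic(costs, 8))
```

    With skewed history costs the static mapping leaves one processor holding most of the expensive histories, while dynamic mapping keeps all processors busy, which is why dynamic scheduling sustains high parallel efficiency.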

  4. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    Science.gov (United States)

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage at no additional cost, compared with the numerical method that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
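
    The general idea, replacing a numerical transform of a kernel by an analytical Bessel series, can be illustrated with the classical Jacobi-Anger expansion exp(i z sin t) = sum_n J_n(z) exp(i n t). This is a generic identity, not the paper's cylindrical kernel; J_n is evaluated here from its integral representation:

```python
import cmath
import math

def bessel_j(n, z, steps=2000):
    """Integral representation J_n(z) = (1/pi) * int_0^pi cos(n*t - z*sin t) dt,
    evaluated with the trapezoidal rule (the integrand's derivative vanishes
    at both endpoints, so the rule converges very fast here)."""
    h = math.pi / steps
    s = 0.5 * (math.cos(0.0) + math.cos(n * math.pi - z * math.sin(math.pi)))
    for k in range(1, steps):
        t = k * h
        s += math.cos(n * t - z * math.sin(t))
    return s * h / math.pi

def jacobi_anger(z, theta, terms=20):
    """Partial sum of exp(1j*z*sin(theta)) = sum_n J_n(z) * exp(1j*n*theta)."""
    return sum(bessel_j(n, z) * cmath.exp(1j * n * theta)
               for n in range(-terms, terms + 1))

if __name__ == "__main__":
    z, theta = 3.0, 0.7
    exact = cmath.exp(1j * z * math.sin(theta))
    print(abs(jacobi_anger(z, theta) - exact))  # truncation error of the series
```

    The series converges superexponentially once |n| exceeds z, which is why a short Bessel expansion can stand in for a full numerical Fourier transform of the kernel.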

  5. Determination of strain fields in porous shape memory alloys using micro-computed tomography

    Science.gov (United States)

    Bormann, Therese; Friess, Sebastian; de Wild, Michael; Schumacher, Ralf; Schulz, Georg; Müller, Bert

    2010-09-01

    Shape memory alloys (SMAs) belong to the class of 'intelligent' materials, since the metal alloy can change its macroscopic shape as the result of the temperature-induced, reversible martensite-austenite phase transition. SMAs are often used in medical applications such as stents, hinge-less instruments, artificial muscles, and dental braces. Rapid prototyping techniques, including selective laser melting (SLM), allow fabricating complex porous SMA microstructures. In the present study, the macroscopic shape changes of the SMA test structures fabricated by SLM have been investigated by means of micro computed tomography (μCT). For this purpose, the SMA structures are placed into the heating stage of the μCT system SkyScan 1172™ (SkyScan, Kontich, Belgium) to acquire three-dimensional datasets above and below the transition temperature, i.e. at room temperature and at about 80°C, respectively. The two datasets were registered on the basis of an affine registration algorithm with nine independent parameters - three for the translation, three for the rotation and three for the scaling in orthogonal directions. Essentially, the scaling parameters characterize the macroscopic deformation of the SMA structure of interest. Furthermore, applying the non-rigid registration algorithm, the three-dimensional strain field of the SMA structure on the micrometer scale comes to light. The strain fields obtained will serve for the optimization of the SLM process and, more importantly, of the design of the complex-shaped SMA structures for tissue engineering and medical implants.
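
    The nine-parameter affine transform described above can be sketched as the composition of scale, rotation, and translation matrices. The decomposition order below (T, then Rz, Ry, Rx, then S) is one common convention, not necessarily the one the authors used:

```python
import math

def affine9(tx, ty, tz, rx, ry, rz, sx, sy, sz):
    """Compose T * Rz * Ry * Rx * S as a 4x4 homogeneous matrix.
    The scale factors (sx, sy, sz) encode the macroscopic strain, e.g.
    sx = 1.01 corresponds to 1% linear expansion along x."""
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]
    cx, snx = math.cos(rx), math.sin(rx)
    cy, sny = math.cos(ry), math.sin(ry)
    cz, snz = math.cos(rz), math.sin(rz)
    S  = [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]
    Rx = [[1, 0, 0, 0], [0, cx, -snx, 0], [0, snx, cx, 0], [0, 0, 0, 1]]
    Ry = [[cy, 0, sny, 0], [0, 1, 0, 0], [-sny, 0, cy, 0], [0, 0, 0, 1]]
    Rz = [[cz, -snz, 0, 0], [snz, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    T  = [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]
    return matmul(T, matmul(Rz, matmul(Ry, matmul(Rx, S))))

def apply(m, p):
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

if __name__ == "__main__":
    # Pure 1% isotropic expansion plus a small x-translation, no rotation.
    m = affine9(0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 1.01, 1.01, 1.01)
    print(apply(m, (1.0, 2.0, 3.0)))
```

    In a registration setting the nine parameters would be optimized to align the two μCT datasets; the fitted (sx, sy, sz) then directly read off the macroscopic deformation between the two temperatures.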

  6. Procedural Memory: Computer Learning in Control Subjects and in Parkinson’s Disease Patients

    Directory of Open Access Journals (Sweden)

    C. Thomas-Antérion

    1996-01-01

    We used perceptual motor tasks involving the learning of mouse control by looking at a Macintosh computer screen. We studied 90 control subjects aged between sixteen and seventy-five years. There was a significant time difference between age groups, but improvement was the same for all subjects. We also studied 24 patients with Parkinson's disease (PD). We observed an influence of age and also of educational levels. The PD patients had difficulties of learning in all tests, but they did not show differences in time when compared to the control group in the first learning session (Student's t-test). They learned two to four and a half times less well than the control group. In the first test, they had some difficulty in initiating the procedure and learned eight times less well than the control group. Performances seemed to be heterogeneous: patients with only tremor (seven) and patients without treatment (five) performed better than others but learned less. Success in procedural tasks for the PD group seemed to depend on the capacity to initiate the response and not on the development of an accurate strategy. Many questions still remain unanswered, and we have to study different kinds of implicit memory tasks to differentiate performance in control and basal ganglia groups.

  7. Context-dependent memory decay is evidence of effort minimization in motor learning: a computational study

    OpenAIRE

    Takiyama, Ken

    2015-01-01

    Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this t...

  8. Computational Modeling of Shape Memory Polymer Origami that Responds to Light

    Science.gov (United States)

    Mailen, Russell William

    Shape memory polymers (SMPs) transform in response to external stimuli, such as infrared (IR) light. Although SMPs have many applications, this investigation focuses on their use as actuators in self-folding origami structures. Ink patterned on the surface of the SMP sheet absorbs thermal energy from the IR light, which produces localized heating. The material shrinks wherever the activation temperature is exceeded and can produce out-of-plane deformation. The time and temperature dependent response of these SMPs provides unique opportunities for developing complex three-dimensional (3D) structures from initially flat sheets through self-folding origami, but the application of this technique requires accurately predicting the final folded or deformed shape. Furthermore, current computational approaches for SMPs do not fully couple the thermo-mechanical response of the material. Hence, a proposed nonlinear, 3D, thermo-viscoelastic finite element framework was formulated to predict deformed shapes for different self-folding systems and compared to experimental results for self-folding origami structures. A detailed understanding of the shape memory response and the effect of controllable design parameters, such as the ink pattern, pre-strain conditions, and applied thermal and mechanical fields, allows for a predictive understanding and design of functional, 3D structures. The proposed modeling framework was used to obtain a fundamental understanding of the thermo-mechanical behavior of SMPs and the impact of the material behavior on hinged self-folding. These predictions indicated how the thermal and mechanical conditions during pre-strain significantly affect the shrinking and folding response of the SMP. Additionally, the externally applied thermal loads significantly influenced the folding rate and maximum bending angle. The computational framework was also adapted to understand the effects of fully coupling the thermal and mechanical response of the material.

  9. Static Computer Memory Integrity Testing (SCMIT): An experiment flown on STS-40 as part of GAS payload G-616

    Science.gov (United States)

    Hancock, Thomas

    1993-01-01

    This experiment investigated the integrity of static computer memory (floppy disk media) when exposed to the environment of low earth orbit. The experiment attempted to record soft-event upsets (bit-flips) in static computer memory. Typical conditions that exist in low earth orbit that may cause soft-event upsets include: cosmic rays, low level background radiation, charged fields, static charges, and the earth's magnetic field. Over the years several spacecraft have been affected by soft-event upsets (bit-flips), and these events have caused a loss of data or affected spacecraft guidance and control. This paper describes a commercial spin-off that is being developed from the experiment.
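
    Detecting soft-event upsets after the fact reduces to comparing a post-flight memory image against the known pre-flight pattern. A minimal sketch (not the experiment's actual procedure) counts flipped bits by popcounting the XOR of the two snapshots:

```python
def count_bit_flips(before: bytes, after: bytes) -> int:
    """Count single-bit upsets between two snapshots of the same memory
    region by popcounting the XOR of corresponding bytes."""
    if len(before) != len(after):
        raise ValueError("snapshots must cover the same region")
    return sum(bin(a ^ b).count("1") for a, b in zip(before, after))

if __name__ == "__main__":
    # Pre-flight pattern vs. a post-flight image with three upset bits.
    before = bytes([0b10101010] * 4)
    after = bytes([0b10101010, 0b10101000, 0b00101010, 0b10101011])
    print(count_bit_flips(before, after))
```

    Writing a known repeating pattern to the media before flight is what makes this comparison possible; any deviation on readback is attributable to an upset rather than to the stored data itself.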

  10. Elucidation and Optimization of Resistive Random Access Memory Switching Behavior for Advanced Computing Applications

    Science.gov (United States)

    Alamgir, Zahiruddin

    RRAM has recently emerged as a strong candidate for non-volatile memory (NVM). Beyond memory applications, RRAM holds promise for use in performing logic functions, mimicking neuromorphic activities, enabling multi-level switching, and as one of the key elements of hardware-based encryption or signal processing systems. It has been shown previously that RRAM resistance levels can be changed by adjusting the compliance current or voltage level. This characteristic makes RRAM suitable for use in setting the synaptic weight in neuromorphic computing circuits. RRAM is also considered a key element in hardware encryption systems, to produce unique and reproducible signals. However, a key challenge to implementing RRAM in these applications is significant cycle-to-cycle performance variability. We sought to develop RRAM that can be tuned to different resistance levels gradually, with high reliability, and low variability. To achieve this goal, we focused on elucidating the conduction mechanisms underlying the resistive switching behavior for these devices. Electrical conduction mechanisms were determined by curve fitting I-V data using different current conduction equations. Temperature studies were also performed to corroborate these data. It was found that Schottky barrier height and width modulation was one of the key parameters that could be tuned to achieve different resistance levels, and for switching resistance states, primarily via oxygen vacancy movement. Oxygen exchange layers with different electronegativity were placed between the top electrode and the oxide layer of TaOx devices to determine the effect of oxygen vacancy concentrations and gradients in these devices. It was found that devices with OELs with lower electronegativity tend to yield greater separation in the OFF vs. ON state resistance levels. As an extension of this work, TaOx based RRAM with Hf as the OEL was fabricated and could be tuned to different resistance levels using pulse width and height.
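
    The curve-fitting step, testing whether measured I-V data follow a given conduction law, can be sketched for Schottky emission, where ln(I) is linear in sqrt(V). The data and constants below are synthetic and purely illustrative, not measurements from the devices described:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def schottky_fit(volts, currents):
    """Schottky emission predicts ln(I) linear in sqrt(V); a good linear fit
    in these transformed coordinates supports that conduction mechanism."""
    return fit_line([math.sqrt(v) for v in volts],
                    [math.log(i) for i in currents])

if __name__ == "__main__":
    # Synthetic I-V data obeying ln I = -20 + 4*sqrt(V) (illustrative numbers).
    volts = [0.1 * k for k in range(1, 11)]
    currents = [math.exp(-20.0 + 4.0 * math.sqrt(v)) for v in volts]
    a, b = schottky_fit(volts, currents)
    print(round(a, 6), round(b, 6))
```

    In practice each candidate conduction equation (Schottky, Poole-Frenkel, space-charge-limited, etc.) implies its own linearizing coordinates, and the mechanism whose coordinates give the best straight line is taken as the operative one, corroborated by temperature dependence.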

  11. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.; Collier, Nathan; Pardo, David; Paszyński, Maciej R.

    2011-01-01

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.
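
    As a rough illustration of how such complexity estimates arise (for a plain 2D grid under nested dissection, not the p-FEM/B-spline setting analyzed in the paper), the recurrence "two half-problems plus a dense separator front" already yields the familiar O(N^(3/2)) flop count:

```python
def nd_flops(n):
    """Toy multi-frontal flop estimate for an n-by-n grid under nested
    dissection: recurse on two halves, then factor the ~n-node separator's
    dense front at ~n^3 cost."""
    if n <= 2:
        return n ** 3
    return 2 * nd_flops(n // 2) + n ** 3

if __name__ == "__main__":
    # Flops grow roughly like N^(3/2) with N = n*n unknowns: doubling n
    # (quadrupling N) multiplies the cost by about 8 = 4^(3/2).
    for n in (64, 128, 256):
        print(n * n, nd_flops(n))
```

    The paper's estimates refine this kind of recurrence by accounting for the front sizes produced by C0 B-spline spaces of order p on a structured grid, which is what makes the analysis tractable in the isogeometric setting.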

  12. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.

    2011-05-14

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  13. Information processing in bacteria: memory, computation, and statistical physics: a key issues review

    International Nuclear Information System (INIS)

    Lan, Ganhui; Tu, Yuhai

    2016-01-01

    preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network—the main players (nodes) and their interactions (links)—in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions to guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high throughput methods (sequencing, gene expression microarray, etc) and development of quantitative in vivo techniques such as various florescence technology, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. 
We will also
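
    The sensing-plus-adaptation loop described above, receptor amplification with a methylation-based working memory, is commonly written as an MWC-type activity with integral feedback. The toy simulation below uses illustrative parameter values, not fitted E. coli numbers: a ligand step first suppresses activity strongly (amplification), then methylation returns it to the set point (adaptation, i.e. the memory):

```python
import math

# Illustrative parameters for an MWC-type receptor model (not fitted values).
N = 6.0                  # receptor cluster size
KI, KA = 18.0, 3000.0    # ligand dissociation constants (inactive/active)
ALPHA = 2.0              # methylation free-energy scale
A0 = 0.5                 # adapted activity set point
KR = 0.5                 # adaptation (methylation) rate

def activity(m, ligand):
    """Cluster activity a = 1/(1 + exp(N*f)) with free energy
    f = ALPHA*(1 - m) + log((1 + L/KI)/(1 + L/KA))."""
    f = ALPHA * (1.0 - m) + math.log((1.0 + ligand / KI) / (1.0 + ligand / KA))
    return 1.0 / (1.0 + math.exp(N * f))

def simulate(ligand_step, t_end=200.0, dt=0.01):
    """Integral feedback dm/dt = KR*(A0 - a): methylation m is the working
    memory that drives activity back to A0 after a ligand step."""
    m, t, a_min = 1.0, 0.0, 1.0   # m = 1 is adapted to zero ligand (f = 0)
    while t < t_end:
        a = activity(m, ligand_step)
        a_min = min(a_min, a)
        m += dt * KR * (A0 - a)
        t += dt
    return a_min, activity(m, ligand_step)

if __name__ == "__main__":
    a_min, a_final = simulate(100.0)
    print(a_min, a_final)  # strong initial response, near-perfect adaptation
```

    The integral-feedback structure is what guarantees perfect adaptation: the methylation level keeps integrating the deviation from the set point until the deviation vanishes, leaving a memory of the stimulus stored in m rather than in the activity itself.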

  14. Information processing in bacteria: memory, computation, and statistical physics: a key issues review

    Science.gov (United States)

    Lan, Ganhui; Tu, Yuhai

    2016-05-01

    preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network—the main players (nodes) and their interactions (links)—in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions to guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high throughput methods (sequencing, gene expression microarray, etc) and development of quantitative in vivo techniques such as various florescence technology, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. 
We will also

  15. Information processing in bacteria: memory, computation, and statistical physics: a key issues review.

    Science.gov (United States)

    Lan, Ganhui; Tu, Yuhai

    2016-05-01

    preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network-the main players (nodes) and their interactions (links)-in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions to guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high throughput methods (sequencing, gene expression microarray, etc) and development of quantitative in vivo techniques such as various florescence technology, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. 
We will also

  16. Episodic grammar: a computational model of the interaction between episodic and semantic memory in language processing

    NARCIS (Netherlands)

    Borensztajn, G.; Zuidema, W.; Carlson, L.; Hoelscher, C.; Shipley, T.F.

    2011-01-01

    We present a model of the interaction of semantic and episodic memory in language processing. Our work shows how language processing can be understood in terms of memory retrieval. We point out that the perceived dichotomy between rule-based versus exemplar-based language modelling can be

  17. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    Science.gov (United States)

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.
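
    One simple way to arrange wavelengths so that cores can reach memory ranks concurrently is a diagonal assignment, wavelength(c, r) = (c + r) mod W. This is an illustrative scheme, not necessarily the paper's exact arrangement:

```python
def wavelength(core, rank, num_wavelengths):
    """Diagonal assignment: core c talks to rank r on wavelength (c+r) mod W.
    For W >= max(#cores, #ranks), a single rank sees every core on a distinct
    wavelength, and a single core reaches every rank on a distinct wavelength,
    so concurrent accesses can share the medium without collisions."""
    return (core + rank) % num_wavelengths

if __name__ == "__main__":
    W, cores, ranks = 8, 8, 8
    # All eight cores addressing rank 5 simultaneously use distinct wavelengths.
    print([wavelength(c, 5, W) for c in range(cores)])
```

    Wavelength-division multiplexing of this kind is what lets the proposed network serve multi-rank accesses in parallel where an electronic bus would serialize them.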

  18. Context-dependent memory decay is evidence of effort minimization in motor learning: a computational study.

    Science.gov (United States)

    Takiyama, Ken

    2015-01-01

    Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed during performing the trained movement as non-trained movements, a recent study reported that the motor memory decays faster during performing the trained movement than non-trained movements, i.e., the decay rate of motor memory is movement or context dependent. Although motor learning has been successfully modeled based on an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered the optimization. Here, I analytically and numerically revealed that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.
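
    The two-process description (error minimization plus trial-by-trial decay) is commonly written as x_{t+1} = a*x_t + b*(target - x_t). A minimal sketch with illustrative retention factors, in which memory decays faster while the trained movement is performed than while a non-trained one is:

```python
def learn(trials=200, a=0.96, b=0.2, target=1.0):
    """Error minimization plus decay: x_{t+1} = a*x_t + b*(target - x_t).
    The asymptote b/(1 - a + b) falls short of the target because decay
    and error-driven learning balance each other."""
    x = 0.0
    for _ in range(trials):
        x = a * x + b * (target - x)
    return x

def decay_phase(x, trials, retention):
    """Error-clamped decay: memory shrinks by the retention factor per trial."""
    for _ in range(trials):
        x *= retention
    return x

if __name__ == "__main__":
    x = learn()
    # Context-dependent decay: lower retention while performing the trained
    # movement (0.96) than a non-trained one (0.995). Values are illustrative.
    print(round(x, 3),
          round(decay_phase(x, 50, 0.96), 3),
          round(decay_phase(x, 50, 0.995), 3))
```

    The paper's contribution is to derive the lower trained-context retention from motor effort optimization within a motor primitive framework, rather than assuming it as a free parameter as this sketch does.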

  19. Context-dependent memory decay is evidence of effort minimization in motor learning: A computational study

    Directory of Open Access Journals (Sweden)

    Ken eTakiyama

    2015-02-01

    Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed during performing the trained movement as non-trained movements, a recent study reported that the motor memory decays faster during performing the trained movement than non-trained movements, i.e., the decay rate of motor memory is movement or context dependent. Although motor learning has been successfully modeled based on an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered the optimization. Here, I analytically and numerically revealed that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.

  20. Coupling Computer Codes for The Analysis of Severe Accident Using A Pseudo Shared Memory Based on MPI

    International Nuclear Information System (INIS)

    Cho, Young Chul; Park, Chang-Hwan; Kim, Dong-Min

    2016-01-01

    Because the analysis of severe accidents involves four codes, the in-vessel analysis code (CSPACE), the ex-vessel analysis code (SACAP), the corium behavior analysis code (COMPASS), and a fission product behavior analysis code, it is complex to implement their coupling with the methodologies used for RELAP and CONTEMPT or for SPACE and CAP. Because of that, an efficient coupling scheme, the so-called pseudo shared memory architecture, was introduced. In this paper, coupling methodologies will be compared and the methodology used for the analysis of severe accidents will be discussed in detail. The barrier between in-vessel and ex-vessel has been removed for the analysis of severe accidents through the implementation of coupled computer codes with a pseudo shared memory architecture based on MPI. What remains is the proper choice and checking of variables and values for the selected severe accident scenarios, e.g., the TMI accident. Even though it is possible to couple more than two computer codes with the pseudo shared memory architecture, the methodology should be revised to couple parallel codes, especially when they are programmed using MPI.
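
    The pseudo shared memory idea, coupled codes reading and writing named variables that are actually owned by a message-passing peer, can be sketched in-process. The variable names and step logic below are hypothetical, and a plain dict stands in for the MPI process that would own the data:

```python
class PseudoSharedMemory:
    """A name -> value store through which coupled codes exchange variables.
    In the real architecture each put/get would be an MPI message to the
    process owning the data; here a dict stands in for that owner."""
    def __init__(self):
        self._store = {}

    def put(self, name, value):
        self._store[name] = value

    def get(self, name):
        return self._store[name]

def in_vessel_step(mem):
    # Hypothetical in-vessel code: publishes a melt release rate it computed.
    mem.put("melt_release_kg_s", 2.5)

def ex_vessel_step(mem):
    # Hypothetical ex-vessel code: reads that rate to advance its own state,
    # e.g. the mass released over a 10 s coupling interval.
    return mem.get("melt_release_kg_s") * 10.0

if __name__ == "__main__":
    mem = PseudoSharedMemory()
    in_vessel_step(mem)
    print(ex_vessel_step(mem))
```

    The point of the abstraction is that neither code needs to know the other's internals or call sequence; both only agree on the shared variable names, which is what removes the in-vessel/ex-vessel barrier.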

  1. The MUSOS (MUsic SOftware System) Toolkit: A computer-based, open source application for testing memory for melodies.

    Science.gov (United States)

    Rainsford, M; Palmer, M A; Paine, G

    2018-04-01

    Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.

  2. Coupling Computer Codes for The Analysis of Severe Accident Using A Pseudo Shared Memory Based on MPI

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Young Chul; Park, Chang-Hwan; Kim, Dong-Min [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    Because the analysis of severe accidents involves four codes, the in-vessel analysis code (CSPACE), the ex-vessel analysis code (SACAP), the corium behavior analysis code (COMPASS), and a fission product behavior analysis code, it is complex to implement their coupling with the methodologies used for RELAP and CONTEMPT or for SPACE and CAP. Because of that, an efficient coupling scheme, the so-called pseudo shared memory architecture, was introduced. In this paper, coupling methodologies will be compared and the methodology used for the analysis of severe accidents will be discussed in detail. The barrier between in-vessel and ex-vessel has been removed for the analysis of severe accidents through the implementation of coupled computer codes with a pseudo shared memory architecture based on MPI. What remains is the proper choice and checking of variables and values for the selected severe accident scenarios, e.g., the TMI accident. Even though it is possible to couple more than two computer codes with the pseudo shared memory architecture, the methodology should be revised to couple parallel codes, especially when they are programmed using MPI.

  3. Fostering multidisciplinary learning through computer-supported collaboration script: The role of a transactive memory script

    NARCIS (Netherlands)

    Noroozi, O.; Weinberger, A.; Biemans, H.J.A.; Teasley, S.D.; Mulder, M.

    2012-01-01

    For solving many of today's complex problems, professionals need to collaborate in multidisciplinary teams. Facilitation of knowledge awareness and coordination among group members, that is through a Transactive Memory System (TMS), is vital in multidisciplinary collaborative settings. Online

  4. Construction and Application of an AMR Algorithm for Distributed Memory Computers

    OpenAIRE

    Deiterding, Ralf

    2003-01-01

    While the parallelization of blockstructured adaptive mesh refinement techniques is relatively straight-forward on shared memory architectures, appropriate distribution strategies for the emerging generation of distributed memory machines are a topic of on-going research. In this paper, a locality-preserving domain decomposition is proposed that partitions the entire AMR hierarchy from the base level on. It is shown that the approach reduces the communication costs and simplifies the im...
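
    The locality-preserving decomposition, partitioning the base level with each cell weighted by the whole refinement hierarchy above it, can be sketched in 1D. The weights and the greedy cut rule are illustrative, not the paper's actual partitioner:

```python
def partition(weights, p):
    """Cut a locality-ordered list of base-level weights (each including the
    work of all finer levels nested over that cell) into p contiguous chunks
    whose loads are close to the average. Contiguity in the ordering is what
    preserves locality and keeps communication on chunk boundaries only."""
    target = sum(weights) / p
    parts, cur, load = [], [], 0.0
    for i, w in enumerate(weights):
        cur.append(i)
        load += w
        if load >= target and len(parts) < p - 1:
            parts.append(cur)
            cur, load = [], 0.0
    parts.append(cur)
    return parts

if __name__ == "__main__":
    # 12 base cells; cells 4-7 carry refined patches and weigh 4x more.
    weights = [1, 1, 1, 1, 4, 4, 4, 4, 1, 1, 1, 1]
    print(partition(weights, 4))
```

    Because the entire hierarchy over a base cell moves with that cell, parent and child patches always land on the same processor, which is the communication saving the abstract refers to.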

  5. The effects of working memory on brain-computer interface performance.

    Science.gov (United States)

    Sprague, Samantha A; McBee, Matthew T; Sellers, Eric W

    2016-02-01

    The purpose of the present study is to evaluate the relationship between working memory and BCI performance. Participants took part in two separate sessions. The first session consisted of three computerized tasks. The List Sorting Working Memory Task was used to measure working memory, the Picture Vocabulary Test was used to measure general intelligence, and the Dimensional Change Card Sort Test was used to measure executive function, specifically cognitive flexibility. The second session consisted of a P300-based BCI copy-spelling task. The results indicate that both working memory and general intelligence are significant predictors of BCI performance. This suggests that working memory training could be used to improve performance on a BCI task. Working memory training may help to reduce a portion of the individual differences that exist in BCI performance allowing for a wider range of users to successfully operate the BCI system as well as increase the BCI performance of current users. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Time complexity analysis for distributed memory computers: implementation of parallel conjugate gradient method

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Haan, M.J.; Hertzberger, L.O.; van Leeuwen, J.

    1991-01-01

    New developments in Computer Science, both hardware and software, offer researchers, such as physicists, unprecedented possibilities to solve their computationally intensive problems. However, full exploitation of, e.g., new massively parallel computers, parallel languages or runtime environments

  7. A highly efficient parallel algorithm for solving the neutron diffusion nodal equations on shared-memory computers

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    Modern parallel computer architectures offer an enormous potential for reducing CPU and wall-clock execution times of large-scale computations commonly performed in various applications in science and engineering. Recently, several authors have reported their efforts in developing and implementing parallel algorithms for solving the neutron diffusion equation on a variety of shared- and distributed-memory parallel computers. Testing of these algorithms for a variety of two- and three-dimensional meshes showed significant speedup of the computation. Even for very large problems (i.e., three-dimensional fine meshes) executed concurrently on a few nodes in serial (nonvector) mode, however, the measured computational efficiency is very low (40 to 86%). In this paper, the authors present a highly efficient (∼85 to 99.9%) algorithm for solving the two-dimensional nodal diffusion equations on the Sequent Balance 8000 parallel computer. Also presented is a model for the performance, represented by the efficiency, as a function of problem size and the number of participating processors. The model is validated through several tests and then extrapolated to larger problems and more processors to predict the performance of the algorithm in more computationally demanding situations
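
    The abstract does not give the functional form of the efficiency model, so as an assumption, here is a minimal Amdahl-style sketch of how efficiency varies with processor count and with the serial fraction of the work (which typically shrinks as the problem grows, matching the observation that large problems reach higher efficiency):

```python
# Generic fixed-serial-fraction efficiency model. This is an assumption:
# the paper's actual model is not stated in the abstract.

def efficiency(p, serial_fraction):
    """Parallel efficiency E = S(p)/p for Amdahl speedup S(p)."""
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)
    return speedup / p

# A larger problem usually means a smaller serial fraction, so efficiency
# at a fixed processor count improves with problem size:
for s in (0.05, 0.01, 0.001):          # serial fraction shrinking with N
    print(round(efficiency(8, s), 3))
```

    Extrapolation as in the paper then amounts to evaluating the fitted model at larger `p` and smaller effective serial fraction.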

  8. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model.

    Science.gov (United States)

    Aberg, Kristoffer C; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improves memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward-delivery and reward-anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of

  9. Local annealing of shape memory alloys using laser scanning and computer vision

    Science.gov (United States)

    Hafez, Moustapha; Bellouard, Yves; Sidler, Thomas C.; Clavel, Reymond; Salathe, Rene-Paul

    2000-11-01

    A complete set-up for local annealing of Shape Memory Alloys (SMA) is proposed. Such alloys, when plastically deformed at a given low temperature, have the ability to recover a previously memorized shape simply by heating up to a higher temperature. They find more and more applications in the fields of robotics and micro engineering. There is a tremendous advantage in using local annealing because this process can produce monolithic parts which have different mechanical behavior at different locations of the same body. Using this approach, it is possible to integrate all the functionality of a device within one piece of material. The set-up is based on a 2 W laser diode emitting at 805 nm and a scanner head. The laser beam is coupled into an optical fiber of 60 µm in diameter. The fiber output is focused on the SMA work-piece using a relay lens system with a 1:1 magnification, resulting in a spot diameter of 60 µm. An imaging system is used to control the position of the laser spot on the sample. In order to displace the spot on the surface, a tip/tilt laser scanner is used. The scanner is positioned in a pre-objective configuration and allows a scan field size of more than 10 × 10 mm². A graphical user interface of the scan field allows the user to quickly set up marks and alter their placement and power density. This is achieved by computer control of the X and Y positions of the scanner as well as the laser diode power. An SMA micro-gripper with a surface area of less than 1 mm² and a jaw opening of 200 µm has been realized using this set-up. It is electrically actuated, and a controlled force of 16 mN can be applied to hold and release small objects such as graded-index micro-lenses at a cycle time of typically 1 s.

  10. In-Depth Analysis of Computer Memory Acquisition Software for Forensic Purposes.

    Science.gov (United States)

    McDown, Robert J; Varol, Cihan; Carvajal, Leonardo; Chen, Lei

    2016-01-01

    Comparison studies of random access memory (RAM) acquisition tools are either limited in metrics, or the selected tools were designed to be executed on older operating systems. Therefore, this study evaluates seven widely used shareware or freeware/open source RAM acquisition forensic tools that are compatible with the latest 64-bit Windows operating systems. The tools' user interface capabilities, platform limitations, reporting capabilities, total execution time, shared and proprietary DLLs, modified registry keys, and invoked files during processing were compared. We observed that Windows Memory Reader and Belkasoft's Live RAM Capturer leave the fewest fingerprints in memory when loaded. On the other hand, ProDiscover and FTK Imager perform poorly in memory usage, processing time, DLL usage, and unwanted artifacts introduced to the system. While Belkasoft's Live RAM Capturer is the fastest to obtain an image of the memory, ProDiscover takes the longest time to do the same job. © 2015 American Academy of Forensic Sciences.

  11. USE OF VIDEOGAMES AND COMPUTER GAMES: INFLUENCES ON ATTENTION, MEMORY, ACADEMIC ACHIEVEMENT AND PROBLEMS BEHAVIOR

    Directory of Open Access Journals (Sweden)

    Harold Germán Rodríguez Celis

    2011-12-01

    Full Text Available This study was designed to identify the relationship between video and computer game use and attention, memory, academic performance and behavior problems in school children in Bogotá. Memory and attention were assessed using a set of different scales of the ENI battery (Matute, Rosselli, Ardila, & Ostrosky-Solís, 2007). For academic performance, school newsletters were used. Behavioral problems were assessed through the CBCL/6-18 questionnaire (Child Behavior Checklist; Achenbach & Edelbrock, 1983). 123 children and 99 parents were enrolled in two factorial-design experimental studies. The results did not support the hypothesis of a significant change in memory tests, or in intra-subject selective visual and hearing attention. However, these variables showed significant differences among children exposed to habitual videogame consumption. No differences were found between the level of regular videogame consumption in school children and academic performance variables or behavioral problems.

  12. A Compute Capable SSD Architecture for Next-Generation Non-volatile Memories

    Energy Technology Data Exchange (ETDEWEB)

    De, Arup [Univ. of California, San Diego, CA (United States)

    2014-01-01

    Existing storage technologies (e.g., disks and flash) are failing to cope with processor and main memory speed and are limiting the overall performance of many large-scale I/O or data-intensive applications. Emerging fast byte-addressable non-volatile memory (NVM) technologies, such as phase-change memory (PCM), spin-transfer torque memory (STTM) and memristor, are very promising and are approaching DRAM-like performance with lower power consumption and higher density as process technology scales. These new memories are narrowing the performance gap between storage and main memory and are putting forward challenging problems for existing SSD architecture, I/O interfaces (e.g., SATA, PCIe) and software. This dissertation addresses those challenges and presents a novel SSD architecture called XSSD. XSSD offloads computation into storage to exploit fast NVMs and reduce the redundant data traffic across the I/O bus. XSSD offers a flexible RPC-based programming framework that developers can use for application development on SSD without dealing with the complications of the underlying architecture and communication management. We have built a prototype of XSSD on the BEE3 FPGA prototyping system. We implemented various data-intensive applications and achieved speedups and energy efficiency of 1.5-8.9 and 1.7-10.27, respectively. This dissertation also compares XSSD with previous work on intelligent storage and intelligent memory. The existing ecosystem and these new enabling technologies make this system more viable than earlier ones.

  13. Compiling for Novel Scratch Pad Memory based Multicore Architectures for Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Aviral

    2016-02-05

    The objective of this proposal is to develop tools and techniques (in the compiler) to manage data of a task and communication among tasks on the scratch pad memory (SPM) of the core, so that any application (a set of tasks) can be executed efficiently on an SPM based manycore architecture.

  14. Computer-Based Working Memory Training in Children with Mild Intellectual Disability

    Science.gov (United States)

    Delavarian, Mona; Bokharaeian, Behrouz; Towhidkhah, Farzad; Gharibzadeh, Shahriar

    2015-01-01

    We designed a working memory (WM) training programme in game framework for mild intellectually disabled students. Twenty-four students participated as test and control groups. The auditory and visual-spatial WM were assessed by primary test, which included computerised Wechsler numerical forward and backward sub-tests and secondary tests, which…

  15. Data systems and computer science space data systems: Onboard memory and storage

    Science.gov (United States)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  16. Cognitive memory.

    Science.gov (United States)

    Widrow, Bernard; Aragon, Juan Carlos

    2013-05-01

    Regarding the workings of the human mind, memory and pattern recognition seem to be intertwined. You generally do not have one without the other. Taking inspiration from life experience, a new form of computer memory has been devised. Certain conjectures about human memory are keys to the central idea. The design of a practical and useful "cognitive" memory system is contemplated, a memory system that may also serve as a model for many aspects of human memory. The new memory does not function like a computer memory where specific data is stored in specific numbered registers and retrieval is done by reading the contents of the specified memory register, or done by matching key words as with a document search. Incoming sensory data would be stored at the next available empty memory location, and indeed could be stored redundantly at several empty locations. The stored sensory data would neither have key words nor would it be located in known or specified memory locations. Sensory inputs concerning a single object or subject are stored together as patterns in a single "file folder" or "memory folder". When the contents of the folder are retrieved, sights, sounds, tactile feel, smell, etc., are obtained all at the same time. Retrieval would be initiated by a query or a prompt signal from a current set of sensory inputs or patterns. A search through the memory would be made to locate stored data that correlates with or relates to the prompt input. The search would be done by a retrieval system whose first stage makes use of autoassociative artificial neural networks and whose second stage relies on exhaustive search. Applications of cognitive memory systems have been made to visual aircraft identification, aircraft navigation, and human facial recognition. Concerning human memory, reasons are given why it is unlikely that long-term memory is stored in the synapses of the brain's neural networks. Reasons are given suggesting that long-term memory is stored in DNA or RNA
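
    The retrieval scheme described above can be caricatured in a few lines: patterns are appended to the next empty "memory folder", and a prompt is correlated against every stored pattern. Plain dot-product correlation stands in for the paper's autoassociative-network first stage, and all names are hypothetical.

```python
# Toy sketch of content-addressed retrieval: no keys, no fixed addresses,
# just correlation of a prompt against everything stored.

def correlate(a, b):
    return sum(x * y for x, y in zip(a, b))

class CognitiveMemory:
    def __init__(self):
        self.folders = []                 # no numbered registers
    def store(self, pattern):
        self.folders.append(pattern)      # next available empty location
    def retrieve(self, prompt):
        # exhaustive search: return the folder best correlated with the prompt
        return max(self.folders, key=lambda p: correlate(p, prompt))

mem = CognitiveMemory()
mem.store((1, 0, 1, 0))   # e.g. sight+sound pattern of object A
mem.store((0, 1, 0, 1))   # object B
print(mem.retrieve((1, 0, 1, -1)))  # noisy prompt still recalls A
```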

  17. Spin-wave interference patterns created by spin-torque nano-oscillators for memory and computation

    International Nuclear Information System (INIS)

    Macia, Ferran; Kent, Andrew D; Hoppensteadt, Frank C

    2011-01-01

    Magnetization dynamics in nanomagnets has attracted broad interest since it was predicted that a dc current flowing through a thin magnetic layer can create spin-wave excitations. These excitations are due to spin momentum transfer, a transfer of spin angular momentum between conduction electrons and the background magnetization, that enables new types of information processing. Here we show how arrays of spin-torque nano-oscillators can create propagating spin-wave interference patterns of use for memory and computation. Memristic transponders distributed on the thin film respond to threshold tunnel magnetoresistance values, thereby allowing spin-wave detection and creating new excitation patterns. We show how groups of transponders create resonant (reverberating) spin-wave interference patterns that may be used for polychronous wave computation and information storage.

  18. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    Science.gov (United States)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process image data stored in the RRAM arrays. The proposed image storage architecture offers better speed and device-consumption efficiency than the previous kernel storage architecture. We further improve the architecture for high-accuracy and low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a more than 67-fold speed boost; 3) 71.4% energy saving.
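
    For reference, the computation such an RRAM array accelerates is an ordinary 2D convolution of a stored image with small kernels. The following dependency-free sketch (valid mode, no padding, hypothetical example data) pins down that operation, not the hardware.

```python
# Reference 2D convolution (valid mode): the arithmetic an RRAM crossbar
# performs in-memory via Ohm's-law multiply and Kirchhoff's-law summation.

def conv2d_valid(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

image = [[1, 2, 3, 0],
         [4, 5, 6, 0],
         [7, 8, 9, 0],
         [0, 0, 0, 0]]
edge = [[0, -1, 0],
        [-1, 4, -1],
        [0, -1, 0]]    # a common 3x3 Laplacian-style kernel
print(conv2d_valid(image, edge))
```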

  19. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    Science.gov (United States)

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)

  20. A microscopically motivated constitutive model for shape memory alloys: Formulation, analysis and computations

    Czech Academy of Sciences Publication Activity Database

    Frost, Miroslav; Benešová, B.; Sedlák, P.

    2016-01-01

    Roč. 21, č. 3 (2016), s. 358-382 ISSN 1081-2865 R&D Projects: GA ČR GA13-13616S; GA ČR GAP201/10/0357 Institutional support: RVO:61388998 Keywords : shape memory alloys * constitutive model * generalized standard materials * dissipation * energetic solution Subject RIV: BA - General Mathematics Impact factor: 2.953, year: 2016 http://mms.sagepub.com/content/21/3/358

  1. Circuit-Switched Memory Access in Photonic Interconnection Networks for High-Performance Embedded Computing

    Science.gov (United States)

    2010-07-22


  2. FLASHRAD: A 3D Rad Hard Memory Module For High Performance Space Computers, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The computing capabilities of onboard spacecraft are a major limiting factor for accomplishing many classes of future missions. Although technology development...

  3. Computational perspectives in the history of science: to the memory of Peter Damerow.

    Science.gov (United States)

    Laubichler, Manfred D; Maienschein, Jane; Renn, Jürgen

    2013-03-01

    Computational methods and perspectives can transform the history of science by enabling the pursuit of novel types of questions, dramatically expanding the scale of analysis (geographically and temporally), and offering novel forms of publication that greatly enhance access and transparency. This essay presents a brief summary of a computational research system for the history of science, discussing its implications for research, education, and publication practices and its connections to the open-access movement and similar transformations in the natural and social sciences that emphasize big data. It also argues that computational approaches help to reconnect the history of science to individual scientific disciplines.

  4. Novel Kinds of Random Access Memory and New Vulnerabilities of Computer Aids based on Them

    Directory of Open Access Journals (Sweden)

    Alexander Mihaylovich Korotin

    2014-09-01

    Full Text Available The article discusses vulnerabilities of computer aids based on existing RAM and mechanisms for restricting the exploitation of such vulnerabilities. In addition, the article discusses the features and operating principles of different kinds of RAM.

  5. Parallel statistical image reconstruction for cone-beam x-ray CT on a shared memory computation platform

    International Nuclear Information System (INIS)

    Kole, J S; Beekman, F J

    2005-01-01

    Statistical reconstruction methods offer possibilities of improving image quality as compared to analytical methods, but current reconstruction times prohibit routine clinical applications. To reduce reconstruction times we have parallelized a statistical reconstruction algorithm for cone-beam x-ray CT, the ordered subset convex algorithm (OSC), and evaluated it on a shared memory computer. Two different parallelization strategies were developed: one that employs parallelism by computing the work for all projections within a subset in parallel, and one that divides the total volume into parts and processes the work for each sub-volume in parallel. Both methods are used to reconstruct a three-dimensional mathematical phantom on two different grid densities. The reconstructed images are binary identical to the result of the serial (non-parallelized) algorithm. The speed-up factor equals approximately 30 when using 32 to 40 processors, and scales almost linearly with the number of CPUs for both methods. The huge reduction in computation time allows us to apply statistical reconstruction to clinically relevant studies for the first time.
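
    The second strategy — dividing the total volume into parts processed in parallel — can be sketched as below. The per-voxel update is a toy stand-in for an OSC update, and the names are hypothetical; the point is that the partitioned result matches the serial one, echoing the "binary identical" observation.

```python
# Sub-volume parallelization sketch: split the volume, update each part
# in parallel, reassemble, and compare against the serial result.
from concurrent.futures import ThreadPoolExecutor

def update(voxel):
    return voxel * 0.5 + 1.0     # toy stand-in for one per-voxel update

def serial(volume):
    return [update(v) for v in volume]

def parallel(volume, parts=4):
    n = len(volume)
    bounds = [(k * n // parts, (k + 1) * n // parts) for k in range(parts)]
    with ThreadPoolExecutor(max_workers=parts) as pool:
        chunks = pool.map(lambda b: [update(v) for v in volume[b[0]:b[1]]],
                          bounds)
    out = []
    for chunk in chunks:
        out.extend(chunk)
    return out

volume = [float(i) for i in range(1000)]
print(parallel(volume) == serial(volume))  # binary identical, as in the paper
```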

  6. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
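
    The row-column decomposition underlying the claim can be sketched with a transpose standing in for the network "all-to-all" redistribution. A naive DFT keeps the sketch dependency-free; this illustrates the general technique, not the patented implementation.

```python
# Row-column 2D transform: 1D transforms along dimension 1, a global
# redistribution (here a transpose stands in for the "all-to-all"),
# then 1D transforms along dimension 2.
import cmath

def dft(row):
    n = len(row)
    return [sum(row[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def fft2d(a):
    a = [dft(row) for row in a]                 # 1D transforms, dimension 1
    a = [list(col) for col in zip(*a)]          # "all-to-all" redistribution
    a = [dft(row) for row in a]                 # 1D transforms, dimension 2
    return [list(col) for col in zip(*a)]       # back to the original layout

x = [[1, 2], [3, 4]]
y = fft2d(x)
print(round(y[0][0].real))   # DC term = sum of all elements
```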

  7. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  8. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  9. Parallelization of MCNP 4, a Monte Carlo neutron and photon transport code system, in highly parallel distributed memory type computer

    International Nuclear Information System (INIS)

    Masukawa, Fumihiro; Takano, Makoto; Naito, Yoshitaka; Yamazaki, Takao; Fujisaki, Masahide; Suzuki, Koichiro; Okuda, Motoi.

    1993-11-01

    In order to improve the accuracy and calculation speed of shielding analyses, MCNP 4, a Monte Carlo neutron and photon transport code system, has been parallelized and its efficiency measured on the highly parallel distributed memory computer AP1000. The code was analyzed statically and dynamically, and a suitable parallelization algorithm was determined for the shielding analysis functions of MCNP 4. This includes a strategy where a new history is assigned to an idling processor element dynamically during execution. Furthermore, to avoid congestion in communication processing, the batch concept, processing multiple histories as a unit, has been introduced. By analyzing a sample cask problem with 2,000,000 histories on the AP1000 with 512 processor elements, a parallelization efficiency of 82% is achieved, and the calculation speed is estimated to be around 50 times that of the FACOM M-780. (author)
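
    The dynamic assignment with batching described above can be sketched with a shared work queue from which idle workers pull batches. Thread workers stand in for AP1000 processor elements, and all names and parameters are illustrative.

```python
# Dynamic batch scheduling sketch: histories are queued in batches, and
# each idle worker pulls the next batch, so no worker waits on a fixed
# static split and per-batch (not per-history) communication is needed.
import queue
import threading

HISTORIES, BATCH = 10_000, 500
work = queue.Queue()
for _ in range(HISTORIES // BATCH):
    work.put(BATCH)            # one queue item = one batch of histories

counts = []
lock = threading.Lock()

def worker():
    done = 0
    while True:
        try:
            n = work.get_nowait()
        except queue.Empty:
            break              # no batches left: this worker retires
        done += n              # the batch of histories would run here
    with lock:
        counts.append(done)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(counts))             # every history was assigned exactly once
```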

  10. Memristor-based ternary content addressable memory (mTCAM) for data-intensive computing

    International Nuclear Information System (INIS)

    Zheng, Le; Shin, Sangho; Steve Kang, Sung-Mo

    2014-01-01

    A memristor-based ternary content addressable memory (mTCAM) is presented. Each mTCAM cell, consisting of five transistors and two memristors to store and search ternary data, offers nonvolatility and higher storage density than conventional CMOS-based TCAMs. Each memristor in the cell can be programmed individually such that a high impedance is always present between searchlines, reducing static energy consumption. A unique two-step write scheme offers reliable and energy-efficient write operations. The search voltage is designed to ensure optimum sensing margins in the presence of variations in memristor devices. Simulations of the proposed mTCAM demonstrate functionality in write and search modes, as well as a search delay of 2 ns and a search energy of 0.99 fJ/bit/search for a word width of 128 bits. (paper)
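
    Functionally, a ternary search reduces to a masked compare: each stored bit is 0, 1, or a don't-care, and a key matches a word when every non-don't-care bit agrees. This software sketch captures only that behavior; the memristor cell design, write scheme, and sensing details are outside its scope.

```python
# Functional model of a ternary CAM search. 'X' marks a don't-care bit.

def matches(stored, key):
    return all(s == 'X' or s == k for s, k in zip(stored, key))

def tcam_search(table, key):
    """Return the indices of all stored words that match the search key."""
    return [i for i, word in enumerate(table) if matches(word, key)]

table = ["10X1", "1001", "0XXX"]
print(tcam_search(table, "1011"))   # word 0 matches via its don't-care bit
```

    In the hardware every word is compared in parallel in a single search cycle; the Python loop is only a functional stand-in for that parallel compare.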

  11. A Computational Account of Children's Analogical Reasoning: Balancing Inhibitory Control in Working Memory and Relational Representation

    Science.gov (United States)

    Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.

    2011-01-01

    Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…

  12. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model.

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Lin, Chia-Hung; Li, Chien-Ming

    2018-02-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concerns to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidities and mortality. The authors intend to propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500-700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level in both end-sensing units and remote monitor devices via a wireless network and fog/cloud computing. The animal experimental results (pig blood) will demonstrate the feasibility.

  13. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Li, Chien-Ming

    2018-01-01

    Blood leakage and blood loss are serious life-threatening complications that can occur during dialysis therapy, and they are of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidity and mortality. The authors propose a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via resistance changes under illumination in the visible spectrum of 500–700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level on both end-sensing units and remote monitoring devices via a wireless network and fog/cloud computing. Animal experiments (pig blood) demonstrate the feasibility of the approach. PMID:29515815
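The heteroassociative memory named in the two records above can be sketched with the classic outer-product (Hebbian) construction. This is a hypothetical toy illustration, not the authors' implementation: the sensor patterns, alarm codes, and dimensions are invented for the example.

```python
import numpy as np

# Toy heteroassociative memory (HAM): bipolar sensor patterns x are associated
# with bipolar alarm-code patterns y via an outer-product weight matrix.
X = np.array([[ 1,  1, -1, -1],   # hypothetical "no leakage" sensor pattern
              [-1, -1,  1,  1]])  # hypothetical "leakage" sensor pattern
Y = np.array([[ 1, -1],           # alarm code: safe
              [-1,  1]])          # alarm code: risk

W = Y.T @ X                       # accumulate the outer products y x^T

def recall(x):
    """Map a (possibly noisy) sensor pattern to the nearest stored alarm code."""
    return np.sign(W @ x)

noisy = np.array([1, 1, -1, 1])   # one flipped sensor reading
print(recall(noisy))              # prints [ 1 -1], the stored "safe" code
```

The point of the heteroassociative (rather than autoassociative) design is that the recalled pattern lives in a different space than the cue, which is what lets a resistance pattern map directly onto an alarm code.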

  14. Change of short-term memory effect in acute ischemic ventricular myocardium: a computational study.

    Science.gov (United States)

    Mei, Xi; Wang, Jing; Zhang, Hong; Liu, Zhi-cheng; Zhang, Zhen-xi

    2014-02-01

    The ionic mechanism underlying changes in short-term memory (STM) during acute myocardial ischemia is not well understood. In this paper, an advanced guinea pig ventricular model developed by Luo and Rudy was used to investigate the STM property of ischemic ventricular myocardium. The STM response was calculated as the time to reach steady-state action potential duration (APD) after an abrupt shortening of the basic cycle length (BCL) in the pacing protocol. Electrical restitution curves (RCs), which simultaneously visualize multiple aspects of APD restitution and STM, were obtained from the dynamic and local S1S2 restitution portrait (RP), where the S1S2 protocol consists of a longer-interval stimulus (S1) followed by a shorter-interval stimulus (S2). The angle between the dynamic RC and the local S1S2 RC reflects the amount of STM. Our results indicated that, compared with the control (normal) condition, the time constant of the STM response decreased significantly under ischemia. Likewise, the angle reflecting the amount of STM was smaller in the ischemic model than in the control model. By tracking the effect of ischemia on intracellular ion concentrations and membrane currents, we found that ischemia-induced changes in membrane currents exert only subtle influences on STM; it is the decline of intracellular calcium concentration that gives rise to most of the decrement in STM. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Three-dimensional magnetic field computation on a distributed memory parallel processor

    International Nuclear Information System (INIS)

    Barion, M.L.

    1990-01-01

    The analysis of three-dimensional magnetic fields by finite element methods frequently proves too onerous a task for the computing resource on which it is attempted. When non-linear and transient effects are included, it may become impossible to calculate the field distribution to sufficient resolution. One approach to this problem is to exploit the natural parallelism in the finite element method via parallel processing. This paper reports on an implementation of a finite element code for non-linear three-dimensional low-frequency magnetic field calculation on Intel's iPSC/2.

  16. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
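The canonical normalization computation that the record above refers to can be written down in a few lines. This is a generic divisive-normalization sketch in the style of standard models (the exponent, semisaturation constant, and contrast values are illustrative assumptions, not the study's parameters):

```python
# Divisive normalization sketch: each unit's response to its contrast c_i is
# divided by a signal pooled over all concurrently represented stimuli.
def normalized_response(contrasts, sigma=0.1, n=2.0):
    pool = sigma**n + sum(c**n for c in contrasts)
    return [c**n / pool for c in contrasts]

alone  = normalized_response([0.5])         # stimulus presented alone
paired = normalized_response([0.5, 0.5])    # plus an equal-contrast competitor
print(alone[0], paired[0])                  # the paired response is suppressed
```

The study's logic follows directly from this form: if two representations share a normalization pool, adding the second stimulus suppresses the response to the first; the reported absence of such suppression between memory stores is what motivates the "bypass" conclusion.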

  17. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations

    Directory of Open Access Journals (Sweden)

    Andrea Stocco

    2018-04-01

    Full Text Available This article describes the data analyzed in the paper “Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model” (Stocco et al., 2017) [1]. The data includes behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompasses individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data includes the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  18. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations.

    Science.gov (United States)

    Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S

    2018-04-01

    This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data includes behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompasses individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data includes the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.
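The full-grid parameter search mentioned in the two records above has a simple generic shape: enumerate every combination of parameter values and simulate each one. This is an illustrative sketch only; the grid values and the stand-in `simulate()` are invented, not the model's real parameters or code.

```python
from itertools import product

# Hypothetical full-grid search: every combination of parameter values is run
# once. Each run is independent, which is what makes the search easy to
# partition across workers, as the record describes.
GRID = {"alpha": [0.1, 0.3, 0.5], "noise": [0.0, 0.05]}

def simulate(alpha, noise):
    return alpha - noise              # stand-in for one model run's fit score

combos = list(product(*GRID.values()))        # 3 x 2 = 6 parameter settings
results = {c: simulate(*c) for c in combos}
best = max(results, key=results.get)
print(best)                                   # → (0.5, 0.0)
```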

  19. Main Memory DBMS

    NARCIS (Netherlands)

    P.A. Boncz (Peter); L. Liu (Lei); M. Tamer Özsu

    2008-01-01

    A main memory database system is a DBMS that primarily relies on main memory for computer data storage. In contrast, conventional database management systems employ disk-based persistent storage.

  20. Computational design of precipitation-strengthened titanium-nickel-based shape memory alloys

    Science.gov (United States)

    Bender, Matthew D.

    Motivated by performance requirements of future medical stent applications, experimental research addresses the design of novel TiNi-based, superelastic shape-memory alloys employing nanoscale precipitation strengthening to minimize accommodation slip for cyclic stability and to increase output stress capability for smaller devices. Using a thermodynamic database describing the B2 and L21 phases in the Al-Ni-Ti-Zr system, Thermo-Calc software was used to model the evolution of phase composition during 600°C isothermal evolution of coherent L21 Heusler phase precipitation from a supersaturated TiNi-based B2 phase matrix in an alloy experimentally characterized by atomic-scale Local Electrode Atom Probe (LEAP) microanalysis. Based on the measured evolution of the alloy hardness (under conditions stable against martensitic transformation), a model for the combined effects of solid solution strengthening and precipitation strengthening was calibrated, and the optimum particle size for efficient strengthening was identified. Thermodynamic modeling of the evolution of measured phase fractions and compositions identified the interfacial capillary energy, enabling thermodynamic design of alloy microstructure with the optimal strengthening particle size. Extensions of the alloy designs to incorporate Pt and Pd for reducing Ni content, enhancing radiopacity, and improving manufacturability were considered using measured Pt and Pd B2/L21 partitioning coefficients. After determining that Pt partitioning greatly increases interphase misfit, full attention was devoted to Pd alloy designs. A quantitative approach to radiopacity was employed using mass attenuation as a metric. Radiopacity improvements were also qualitatively observed using x-ray fluoroscopy. Transformation temperatures were experimentally measured as a function of Al and Pd content. Redlich-Kister polynomial modeling was utilized for the dependence of transformation reversion Af temperature on B2 matrix phase

  1. Ab Initio Molecular-Dynamics Simulation of Neuromorphic Computing in Phase-Change Memory Materials.

    Science.gov (United States)

    Skelton, Jonathan M; Loke, Desmond; Lee, Taehoon; Elliott, Stephen R

    2015-07-08

    We present an in silico study of the neuromorphic-computing behavior of the prototypical phase-change material, Ge2Sb2Te5, using ab initio molecular-dynamics simulations. Stepwise changes in structural order in response to temperature pulses of varying length and duration are observed, and a good reproduction of the spike-timing-dependent plasticity observed in nanoelectronic synapses is demonstrated. Short above-melting pulses lead to instantaneous loss of structural and chemical order, followed by delayed partial recovery upon structural relaxation. We also investigate the link between structural order and electrical and optical properties. These results pave the way toward a first-principles understanding of phase-change physics beyond binary switching.
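The spike-timing-dependent plasticity that the record above reports reproducing in a phase-change synapse is conventionally described by an exponential learning window. As a hedged illustration (the amplitudes and time constant here are generic textbook values, not fitted to Ge2Sb2Te5):

```python
import math

# Generic STDP rule: the synaptic weight change depends on the interval
# dt = t_post - t_pre between pre- and postsynaptic spikes.
def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    if dt > 0:                          # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    else:                               # post before pre: depression
        return -a_minus * math.exp(dt / tau)

print(stdp_dw(5.0))    # small positive weight change
print(stdp_dw(-5.0))   # small negative weight change
```

In the phase-change setting, the sign and magnitude of the "weight" change correspond to stepwise crystallization or amorphization induced by the relative timing of the temperature pulses.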

  2. Number sense or working memory? The effect of two computer-based trainings on mathematical skills in elementary school.

    Science.gov (United States)

    Kuhn, Jörg-Tobias; Holling, Heinz

    2014-01-01

    Research on the improvement of elementary school mathematics has shown that computer-based training of number sense (e.g., processing magnitudes or locating numbers on the number line) can lead to substantial achievement gains in arithmetic skills. Recent studies, however, have highlighted that training domain-general cognitive abilities (e.g., working memory [WM]) may also improve mathematical achievement. This study addressed the question of whether a training of domain-specific number sense skills or domain-general WM abilities is more appropriate for improving mathematical abilities in elementary school. Fifty-nine children (M age = 9 years, 32 girls and 27 boys) received either a computer-based, adaptive training of number sense (n = 20), WM skills (n = 19), or served as a control group (n = 20). The training duration was 20 min per day for 15 days. Before and after training, we measured mathematical ability using a curriculum-based math test, as well as spatial WM. For both training groups, we observed substantial increases in the math posttest compared to the control group (d = .54 for number sense skills training, d = .57 for WM training, respectively). Whereas the number sense group showed significant gains in arithmetical skills, the WM training group exhibited marginally significant gains in word problem solving. However, no training group showed significant posttest gains on the spatial WM task. Results indicate that a short training of either domain-specific or domain-general skills may result in reliable short-term training gains in math performance, although no stable training effects were found in the spatial WM task.

  3. Cognitive and Neural Plasticity in Older Adults’ Prospective Memory Following Training with the Virtual Week Computer Game

    Directory of Open Access Journals (Sweden)

    Nathan S Rose

    2015-10-01

    Full Text Available Prospective memory (PM) – the ability to remember and successfully execute our intentions and planned activities – is critical for functional independence and declines with age, yet few studies have attempted to train PM in older adults. We developed a PM training program using the Virtual Week computer game. Trained participants played the game in twelve 1-hour sessions over one month. Measures of neuropsychological functions, lab-based PM, event-related potentials (ERPs) during performance on a lab-based PM task, instrumental activities of daily living, and real-world PM were assessed before and after training. Performance was compared to both no-contact and active (music training) control groups. PM on the Virtual Week game dramatically improved following training relative to controls, suggesting PM plasticity is preserved in older adults. Relative to control participants, training did not produce reliable transfer to laboratory-based tasks, but was associated with a reduction of an ERP component (sustained negativity over occipito-parietal cortex) associated with processing PM cues, indicative of more automatic PM retrieval. Most importantly, training produced far transfer to real-world outcomes including improvements in performance on real-world PM and activities of daily living. Real-world gains were not observed in either control group. Our findings demonstrate that short-term training with the Virtual Week game produces cognitive and neural plasticity that may result in real-world benefits to supporting functional independence in older adulthood.

  4. Cognitive and neural plasticity in older adults' prospective memory following training with the Virtual Week computer game.

    Science.gov (United States)

    Rose, Nathan S; Rendell, Peter G; Hering, Alexandra; Kliegel, Matthias; Bidelman, Gavin M; Craik, Fergus I M

    2015-01-01

    Prospective memory (PM) - the ability to remember and successfully execute our intentions and planned activities - is critical for functional independence and declines with age, yet few studies have attempted to train PM in older adults. We developed a PM training program using the Virtual Week computer game. Trained participants played the game in 12, 1-h sessions over 1 month. Measures of neuropsychological functions, lab-based PM, event-related potentials (ERPs) during performance on a lab-based PM task, instrumental activities of daily living, and real-world PM were assessed before and after training. Performance was compared to both no-contact and active (music training) control groups. PM on the Virtual Week game dramatically improved following training relative to controls, suggesting PM plasticity is preserved in older adults. Relative to control participants, training did not produce reliable transfer to laboratory-based tasks, but was associated with a reduction of an ERP component (sustained negativity over occipito-parietal cortex) associated with processing PM cues, indicative of more automatic PM retrieval. Most importantly, training produced far transfer to real-world outcomes including improvements in performance on real-world PM and activities of daily living. Real-world gains were not observed in either control group. Our findings demonstrate that short-term training with the Virtual Week game produces cognitive and neural plasticity that may result in real-world benefits to supporting functional independence in older adulthood.

  5. An A.P.L. micro-programmed machine: implementation on a Multi-20 mini-computer, memory organization, micro-programming and flowcharts

    International Nuclear Information System (INIS)

    Granger, Jean-Louis

    1975-01-01

    This work presents an APL interpreter implemented on a MULTI 20 mini-computer. It includes a left-to-right syntax analyser and a recursive routine for generation and execution. This routine uses a beating method for array processing. Moreover, during the execution of all APL statements, dynamic memory allocation is used. Execution of basic operations has been micro-programmed. The basic APL interpreter has a length of 10 K bytes. It uses overlay methods. (author) [fr

  6. Analogous Mechanisms of Selection and Updating in Declarative and Procedural Working Memory: Experiments and a Computational Model

    Science.gov (United States)

    Oberauer, Klaus; Souza, Alessandra S.; Druey, Michel D.; Gade, Miriam

    2013-01-01

    The article investigates the mechanisms of selecting and updating representations in declarative and procedural working memory (WM). Declarative WM holds the objects of thought available, whereas procedural WM holds representations of what to do with these objects. Both systems consist of three embedded components: activated long-term memory, a…

  7. The Memory Aid study: protocol for a randomized controlled clinical trial evaluating the effect of computer-based working memory training in elderly patients with mild cognitive impairment (MCI).

    Science.gov (United States)

    Flak, Marianne M; Hernes, Susanne S; Chang, Linda; Ernst, Thomas; Douet, Vanessa; Skranes, Jon; Løhaugen, Gro C C

    2014-05-03

    Mild cognitive impairment (MCI) is a condition characterized by memory problems that are more severe than the normal cognitive changes due to aging, but less severe than dementia. Reduced working memory (WM) is regarded as one of the core symptoms of an MCI condition. Recent studies have indicated that WM can be improved through computer-based training. The objective of this study is to evaluate whether WM training is effective in improving cognitive function in elderly patients with MCI, and whether cognitive training induces structural changes in the white and gray matter of the brain, as assessed by structural MRI. The proposed study is a blinded, randomized, controlled trial that will include 90 elderly patients diagnosed with MCI at a hospital-based memory clinic. The participants will be randomized to either a training program or a placebo version of the program. The intervention is computerized WM training performed in 25 sessions of 45 minutes over 5 weeks. The placebo version is identical in duration but is non-adaptive in the difficulty level of the tasks. Neuropsychological assessment and structural MRI will be performed before and 1 month after training, and at a 5-month follow-up. If computer-based training results in positive changes to memory functions in patients with MCI, this may represent a new, cost-effective treatment for MCI. Secondly, evaluation of any training-induced structural changes to gray or white matter will improve the current understanding of the mechanisms behind effective cognitive interventions in patients with MCI. ClinicalTrials.gov NCT01991405. November 18, 2013.

  8. Multichannel analyzer using the direct-memory-access channel in a personal computer; Mnogokanal`nyj analizator v personal`nom komp`yutere, ispol`zuyushchij kanal pryamogo dostupa k pamyati

    Energy Technology Data Exchange (ETDEWEB)

    Georgiev, G; Vankov, I; Dimitrov, L [Inst. Yadernykh Issledovanij i Yadernoj Ehnergetiki Bolgarskoj Akademii Nauk, Sofiya (Bulgaria)]; Peev, I [Firma TOIVEL, Sofiya (Bulgaria)]

    1996-12-31

    The paper describes a multichannel analyzer for spectrometry data built around the personal computer's memory and its direct-memory-access channel. The analyzer software, consisting of a driver and a spectrum-display control program, is described. 2 figs.

  9. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    Energy Technology Data Exchange (ETDEWEB)

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos is used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular-references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost, and therefore C++0x, approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, and will be less likely to contain silent (but deadly) memory

  10. Applications for Packetized Memory Interfaces

    OpenAIRE

    Watson, Myles Glen

    2015-01-01

    The performance of the memory subsystem has a large impact on the performance of modern computer systems. Many important applications are memory bound and others are expected to become memory bound in the future. The importance of memory performance makes it imperative to understand and optimize the interactions between applications and the system architecture. Prototyping and exploring various configurations of memory systems can give important insights, but current memory interfaces are lim...

  11. Determination of memory performance

    International Nuclear Information System (INIS)

    Gopych, P.M.

    1999-01-01

    Within the framework of statistical hypothesis testing, we propose a model definition and a computer method for calculating human memory performance measures widely used in neuropsychology (free recall, cued recall, and recognition probabilities), together with a model definition and a computer method for calculating the intensities of cues used in experiments that test human memory quality. Models for active and passive memory traces, and the relations between them, are derived. It is shown that an autoassociative memory unit in the form of a short two-layer artificial neural network, with or without damage, can be used to model memory performance in subjects with or without local brain lesions
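The autoassociative unit described in the record above can be illustrated with the standard outer-product construction, in which a damaged cue is mapped back onto the stored trace. This is a generic Hopfield-style sketch under invented pattern values, not the paper's network:

```python
import numpy as np

# Autoassociative recall sketch: a stored bipolar pattern is recovered from a
# damaged cue, the basic operation behind modeling cued recall and recognition.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)                 # no self-connections

cue = pattern.copy()
cue[0] = -cue[0]                       # "damage": flip one element of the cue
recalled = np.sign(W @ cue)
print((recalled == pattern).all())     # prints True: the trace is restored
```

Damage to the network itself (e.g. zeroing rows of `W`) degrades recall gracefully, which is the mechanism such models use to mimic performance in subjects with local brain lesions.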

  12. Cognitive cooperation groups mediated by computers and internet present significant improvement of cognitive status in older adults with memory complaints: a controlled prospective study

    Directory of Open Access Journals (Sweden)

    Rodrigo de Rosso Krug

    Full Text Available ABSTRACT Objective To estimate the effect of participating in cognitive cooperation groups, mediated by computers and the internet, on the Mini-Mental State Examination (MMSE) percent variation of outpatients with memory complaints attending two memory clinics. Methods A prospective controlled intervention study carried out from 2006 to 2013 with 293 elders. The intervention group (n = 160) attended a cognitive cooperation group (20 sessions of 1.5 hours each). The control group (n = 133) received routine medical care. Outcome was the percent variation in the MMSE. Control variables included gender, age, marital status, schooling, hypertension, diabetes, dyslipidaemia, hypothyroidism, depression, vascular diseases, polymedication, use of benzodiazepines, exposure to tobacco, sedentary lifestyle, obesity and functional capacity. The final model was obtained by multivariate linear regression. Results The intervention group obtained an independent positive variation of 24.39% (CI 95% = 14.86/33.91) in the MMSE compared to the control group. Conclusion The results suggested that cognitive cooperation groups, mediated by computers and the internet, are associated with cognitive status improvement of older adults in memory clinics.

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  15. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  17. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory

    Directory of Open Access Journals (Sweden)

    Emmanuel Bigand

    2014-06-01

    Full Text Available During the last decade, it has been argued (1) that music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods as well as with developmental and cross-cultural approaches can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulation also raises methodological and theoretical challenges to study musical syntax while disentangling the confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds.

  18. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory.

    Science.gov (United States)

    Bigand, Emmanuel; Delbé, Charles; Poulin-Charronnat, Bénédicte; Leman, Marc; Tillmann, Barbara

    2014-01-01

    During the last decade, it has been argued that (1) music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods as well as with developmental and cross-cultural approaches can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulation also raises methodological and theoretical challenges to study musical syntax while disentangling the confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds.
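    The accumulation account can be made concrete with a toy sketch: each chord deposits a pitch-class (chroma) vector into an echoic image that decays exponentially, and a target chord's "priming" is the similarity between its fast-decaying local image and the slowly decaying image of the preceding context. This is a minimal illustration of the idea only; the binary chord vectors, decay constants and cosine similarity below are assumptions for the sketch, not the authors' actual model, which builds on richer periodicity-pitch representations.

```python
import math

def decay_accumulate(frames, tau):
    """Leaky integration of successive chroma vectors: the echoic image
    at step t is frame[t] + exp(-1/tau) * image[t-1]."""
    image = [0.0] * len(frames[0])
    alpha = math.exp(-1.0 / tau)
    images = []
    for frame in frames:
        image = [f + alpha * i for f, i in zip(frame, image)]
        images.append(list(image))
    return images

def similarity(u, v):
    """Cosine similarity between two pitch images."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def priming_score(ctx, target, short_tau=0.5, long_tau=4.0):
    """Similarity between the fast-decaying image containing the target
    chord and the slowly decaying image of the preceding context."""
    frames = ctx + [target]
    local = decay_accumulate(frames, short_tau)[-1]
    global_image = decay_accumulate(frames, long_tau)[-2]
    return similarity(local, global_image)

def chord(*pitch_classes):
    """Binary 12-dimensional chroma vector for a chord."""
    v = [0.0] * 12
    for pc in pitch_classes:
        v[pc] = 1.0
    return v

# Toy I-IV-V context in C major, ending on a harmonically related (C)
# or an unrelated (A-flat major) target chord.
C, F, G, Ab = chord(0, 4, 7), chord(5, 9, 0), chord(7, 11, 2), chord(8, 0, 3)
context = [C, F, G, C, F, G]
print("related  :", round(priming_score(context, C), 3))
print("unrelated:", round(priming_score(context, Ab), 3))
```

    In this toy setup the harmonically related target scores higher than the unrelated one, mirroring the harmonic-priming effects that the simulations account for without any syntax-like rules.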

  20. Neutron detection and characterization for non-proliferation applications using 3D computer optical memories [Use of 3D optical computer memory for radiation detectors/dosimeters. Final progress report

    International Nuclear Information System (INIS)

    Phillips, Gary W.

    2000-01-01

    We have investigated 3-dimensional optical random access memory (3D-ORAM) materials for the detection and characterization of charged particles or neutrons, by detecting the tracks left by the recoil charged particles produced by the neutrons. We have characterized the response of these materials to protons, alpha particles and carbon-12 nuclei as a function of dose and energy. We have observed individual tracks using scanning electron microscopy and atomic force microscopy. We are investigating the use of neural net analysis to characterize energetic neutron fields from their track structure in these materials

  1. Quantum memory

    Science.gov (United States)

    Le Gouët, Jean-Louis; Moiseev, Sergey

    2012-06-01

    Interaction of quantum radiation with multi-particle ensembles has sparked off intense research efforts during the past decade. Emblematic of this field is the quantum memory scheme, where a quantum state of light is mapped onto an ensemble of atoms and then recovered in its original shape. While opening new access to the basics of light-atom interaction, quantum memory also appears as a key element for information processing applications, such as linear optics quantum computation and long-distance quantum communication via quantum repeaters. Not surprisingly, it is far from trivial to practically recover a stored quantum state of light and, although impressive progress has already been accomplished, researchers are still struggling to reach this ambitious objective. This special issue provides an account of the state-of-the-art in a fast-moving research area that makes physicists, engineers and chemists work together at the forefront of their discipline, involving quantum fields and atoms in different media, magnetic resonance techniques and material science. Various strategies have been considered to store and retrieve quantum light. The explored designs belong to three main—while still overlapping—classes. In architectures derived from photon echo, information is mapped over the spectral components of inhomogeneously broadened absorption bands, such as those encountered in rare earth ion doped crystals and atomic gases in external gradient magnetic field. Protocols based on electromagnetic induced transparency also rely on resonant excitation and are ideally suited to the homogeneous absorption lines offered by laser cooled atomic clouds or ion Coulomb crystals. Finally off-resonance approaches are illustrated by Faraday and Raman processes. Coupling with an optical cavity may enhance the storage process, even for negligibly small atom number. Multiple scattering is also proposed as a way to enlarge the quantum interaction distance of light with matter. The

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  5. Computer-Based Cognitive Programs for Improvement of Memory, Processing Speed and Executive Function during Age-Related Cognitive Decline: A Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Yan-kun Shao

    Full Text Available Several studies have assessed the effects of computer-based cognitive programs (CCP) in the management of age-related cognitive decline, but the role of CCP remains controversial. Therefore, this systematic review evaluated the evidence on the efficacy of CCP for age-related cognitive decline in healthy older adults. Six electronic databases (through October 2014) were searched. The risk of bias was assessed using the Cochrane Collaboration tool. The standardized mean difference (SMD) and 95% confidence intervals (CI) of a random-effects model were calculated. The heterogeneity was assessed using the Cochran Q statistic and quantified with the I2 index. Twelve studies were included in the current review and were considered to be of moderate to high methodological quality. The aggregated results indicate that CCP improves memory performance (SMD, 0.31; 95% CI 0.16 to 0.45; p < 0.0001) and processing speed (SMD, 0.50; 95% CI 0.14 to 0.87; p = 0.007) but not executive function (SMD, -0.12; 95% CI -0.33 to 0.09; p = 0.27). Furthermore, there were long-term gains in memory performance (SMD, 0.59; 95% CI 0.13 to 1.05; p = 0.01). CCP may be a valid complementary and alternative therapy for age-related cognitive decline, especially for memory performance and processing speed. However, more studies with longer follow-ups are warranted to confirm the current findings.
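    The pooling procedure the review describes (random-effects SMD with a 95% CI, Cochran's Q and the I2 index) can be sketched as follows. The per-study effects below are hypothetical numbers for illustration, not the data from the twelve reviewed trials, and the DerSimonian-Laird estimator is one common choice for the between-study variance.

```python
import math

def pooled_smd(studies):
    """DerSimonian-Laird random-effects pooling of standardized mean
    differences. Each study is a (smd, variance_of_smd) pair."""
    w = [1.0 / v for _, v in studies]           # fixed-effect weights
    smd_fe = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)
    # Cochran's Q and the I^2 heterogeneity index
    q = sum(wi * (d - smd_fe) ** 2 for wi, (d, _) in zip(w, studies))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Between-study variance tau^2 (DerSimonian-Laird)
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights, pooled estimate and 95% CI
    w_re = [1.0 / (v + tau2) for _, v in studies]
    smd_re = sum(wi * d for wi, (d, _) in zip(w_re, studies)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return smd_re, (smd_re - 1.96 * se, smd_re + 1.96 * se), i2

# Hypothetical per-study effects (SMD, variance), for illustration only.
memory_studies = [(0.25, 0.02), (0.40, 0.03), (0.30, 0.015), (0.28, 0.025)]
est, (lo, hi), i2 = pooled_smd(memory_studies)
print(f"pooled SMD = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], I^2 = {i2:.0f}%")
```

    With little heterogeneity (Q below its degrees of freedom), tau^2 collapses to zero and the random-effects estimate coincides with the fixed-effect one; otherwise tau^2 widens the confidence interval, which is why the model is the conservative default in reviews like this.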

  6. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model

    Directory of Open Access Journals (Sweden)

    Lorenz Gönner

    2017-10-01

    Full Text Available Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions.
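    The Bayesian decoding step applied to the model's spike trains can be illustrated with a standard Poisson population decoder over Gaussian place fields. The tuning curves, spike counts and time bin below are illustrative assumptions for the sketch, not the model's actual parameters.

```python
import math

def gaussian_tuning(center, width=5.0, peak=20.0):
    """Place-cell firing-rate map (Hz) on a linear track."""
    return lambda x: peak * math.exp(-((x - center) ** 2) / (2 * width ** 2))

def decode(spike_counts, rate_maps, positions, dt=0.02):
    """Poisson decoding: P(x | n) is proportional to
    prod_i f_i(x)^{n_i} * exp(-dt * f_i(x)), with a flat spatial prior."""
    log_post = []
    for x in positions:
        logp = 0.0
        for n, f in zip(spike_counts, rate_maps):
            rate = max(f(x), 1e-9)           # guard against log(0)
            logp += n * math.log(rate * dt) - rate * dt
        log_post.append(logp)
    m = max(log_post)                        # normalize in log space
    p = [math.exp(lp - m) for lp in log_post]
    s = sum(p)
    return [pi / s for pi in p]

# Five hypothetical place cells with fields at 10, 30, 50, 70, 90 cm.
cells = [gaussian_tuning(c) for c in (10, 30, 50, 70, 90)]
track = list(range(0, 101, 2))

# A burst in which the cell with its field at 50 cm fires most:
# the decoded position should land near 50 cm.
counts = [0, 1, 3, 1, 0]
posterior = decode(counts, cells, track)
decoded = track[posterior.index(max(posterior))]
print("decoded position:", decoded, "cm")
```

    Decoding each successive time bin of a simulated burst this way yields a trajectory of positions, which is how the model's generated sequences can be compared directly with experimentally decoded ones.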

  7. Personal Computers.

    Science.gov (United States)

    Toong, Hoo-min D.; Gupta, Amar

    1982-01-01

    Describes the hardware, software, applications, and current proliferation of personal computers (microcomputers). Includes discussions of microprocessors, memory, output (including printers), application programs, the microcomputer industry, and major microcomputer manufacturers (Apple, Radio Shack, Commodore, and IBM). (JN)

  8. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successes prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  15. Computer Games as a Tool for Implementation of Memory Policy (on the Example of Displaying Events of The Great Patriotic War in Video Games

    Directory of Open Access Journals (Sweden)

    Сергей Игоревич Белов

    2018-12-01

    Full Text Available The presented work is devoted to the practice of using computer games as a tool of memory policy. The relevance of this study is determined both by the growing importance of video games as a means of forming ideas about the events of the past and by the low degree to which this topic has been studied. The goal of the research is to identify the prospects for using computer games as an instrument for implementing memory policy, taking the depiction of the events of the Great Patriotic War as a case study. The empirical base of the work was formed by generalizing the content of such video games as “Call of Duty 1”, “Call of Duty 14: WWII”, “Company of Heroes 2” and “Commandos 3: Destination Berlin”. The methodological base of the research draws on elements of descriptive political analysis, B. F. Skinner's theory of operant conditioning, and the social identity theory of H. Tajfel and J. Turner. The author concludes that familiarization with these games contributes to the consolidation in users' minds of negative stereotypes regarding the participation of the Red Army in the Great Patriotic War. The integration of negative images is carried out using the methods of operant conditioning. The integration of this system of negative images into the mass consciousness of the inhabitants of the post-Soviet space makes it difficult to preserve the remnants of Soviet political symbols and the elements of identity constructed on their basis. The author puts forward the hypothesis that, in the case of complete de-Sovietization of the public policy space in the states that emerged from the collapse of the USSR, the task of revising the history of the Great Patriotic War will be greatly facilitated, and with the subsequent passing of the last eyewitnesses of the relevant events, achieving this goal will be only a

  16. Memory architecture

    NARCIS (Netherlands)

    2012-01-01

    A memory architecture is presented. The memory architecture comprises a first memory and a second memory. The first memory has at least a bank with a first width addressable by a single address. The second memory has a plurality of banks of a second width, said banks being addressable by components
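    The contrast between a single wide bank and several narrower, independently addressable banks can be sketched as simple address arithmetic. The widths and bank count below are assumed for illustration only, since the record does not specify them.

```python
WIDE_WIDTH = 16      # bytes per row of the single wide bank (assumed)
NARROW_WIDTH = 4     # bytes per row of each narrow bank (assumed)
NUM_BANKS = 4        # number of narrow banks (assumed)

def wide_address(byte_addr):
    """First memory: a single address selects one whole wide row."""
    return {"row": byte_addr // WIDE_WIDTH, "offset": byte_addr % WIDE_WIDTH}

def banked_address(byte_addr):
    """Second memory: low-order word bits pick the bank, so consecutive
    words land in different banks and can be accessed in parallel by
    different components."""
    word = byte_addr // NARROW_WIDTH
    return {"bank": word % NUM_BANKS,
            "row": word // NUM_BANKS,
            "offset": byte_addr % NARROW_WIDTH}

for addr in (0, 4, 8, 12, 16):
    print(addr, wide_address(addr), banked_address(addr))
```

    With the low address bits selecting the bank, addresses 0, 4, 8 and 12 map to banks 0 through 3, so four components can each hit a different bank in the same cycle; address 16 wraps back to bank 0.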

  17. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  18. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  20. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  1. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization and conversion to RAW format; the samples were then run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  4. Phase change memory

    CERN Document Server

    Qureshi, Moinuddin K

    2011-01-01

    As conventional memory technologies such as DRAM and Flash run into scaling challenges, architects and system designers are forced to look at alternative technologies for building future computer systems. This synthesis lecture begins by listing the requirements for a next generation memory technology and briefly surveys the landscape of novel non-volatile memories. Among these, Phase Change Memory (PCM) is emerging as a leading contender, and the authors discuss the material, device, and circuit advances underlying this exciting technology. The lecture then describes architectural solutions t

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of the proton-proton data in 2011. A variety of activities are still ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and the success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support is provided for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations. The GlideInWMS and its components are now installed and deployed at CERN, adding to the GlideInWMS factory located in the US. A new operational collaboration between the CERN team and the UCSD GlideIn factory operators covers each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  6. A computational model for evaluating the effects of attention, memory, and mental models on situation assessment of nuclear power plant operators

    International Nuclear Information System (INIS)

    Lee, Hyun-Chul; Seong, Poong-Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of the plant state, as failures of situation assessment may lead to wrong decisions for process control and, finally, to errors of commission in nuclear power plants. A few computational models that can be used to predict and quantify the situation awareness of operators have been suggested. However, these models do not sufficiently consider the human characteristics of nuclear power plant operators. In this paper, we propose a computational model for the situation assessment of nuclear power plant operators using a Bayesian network. This model incorporates human factors that significantly affect operators' situation assessment, such as attention, working memory decay, and mental models. As the proposed model provides quantitative results for situation assessment and diagnostic performance, we expect that it can be used in the design and evaluation of human system interfaces as well as in the prediction of situation awareness errors in human reliability analysis.

  7. A computational model for evaluating the effects of attention, memory, and mental models on situation assessment of nuclear power plant operators

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyun-Chul [Instrumentation and Control/Human Factors Division, Korea Atomic Energy Research Institute, 1045 Daedeok-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of)], E-mail: leehc@kaeri.re.kr; Seong, Poong-Hyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2009-11-15

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of the plant state, as failures of situation assessment may lead to wrong decisions for process control and, finally, to errors of commission in nuclear power plants. A few computational models that can be used to predict and quantify the situation awareness of operators have been suggested. However, these models do not sufficiently consider the human characteristics of nuclear power plant operators. In this paper, we propose a computational model for the situation assessment of nuclear power plant operators using a Bayesian network. This model incorporates human factors that significantly affect operators' situation assessment, such as attention, working memory decay, and mental models. As the proposed model provides quantitative results for situation assessment and diagnostic performance, we expect that it can be used in the design and evaluation of human system interfaces as well as in the prediction of situation awareness errors in human reliability analysis.

  8. Optical quantum memory

    Science.gov (United States)

    Lvovsky, Alexander I.; Sanders, Barry C.; Tittel, Wolfgang

    2009-12-01

    Quantum memory is essential for the development of many devices in quantum information processing, including a synchronization tool that matches various processes within a quantum computer, an identity quantum gate that leaves any state unchanged, and a mechanism to convert heralded photons to on-demand photons. In addition to quantum computing, quantum memory will be instrumental for implementing long-distance quantum communication using quantum repeaters. The importance of this basic quantum gate is exemplified by the multitude of optical quantum memory mechanisms being studied, such as optical delay lines, cavities and electromagnetically induced transparency, as well as schemes that rely on photon echoes and the off-resonant Faraday interaction. Here, we report on state-of-the-art developments in the field of optical quantum memory, establish criteria for successful quantum memory and detail current performance levels.

  9. Transactional Memory

    CERN Document Server

    Harris, Tim; Rajwar, Ravi

    2010-01-01

    The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically - either it completes successfully and...
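    The execute-atomically-or-retry behaviour described above can be sketched as a toy software transactional memory in Python (illustrative only, not the book's design; the names `TVar` and `atomically` are invented, and reads are validated optimistically against version counters before a short locked commit):

```python
import threading

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()

def atomically(transaction):
    # Optimistic concurrency: run the transaction against private read/write
    # logs, then validate the versions of everything read and commit under a
    # short lock; if another thread committed a conflicting update, retry.
    while True:
        read_log, write_log = {}, {}

        def read(tv):
            if tv in write_log:
                return write_log[tv]
            read_log.setdefault(tv, tv.version)
            return tv.value

        def write(tv, value):
            write_log[tv] = value

        result = transaction(read, write)
        with _commit_lock:
            if all(tv.version == v for tv, v in read_log.items()):
                for tv, value in write_log.items():
                    tv.value = value
                    tv.version += 1
                return result
        # validation failed: another transaction intervened, so start over

counter = TVar(0)

def increment(read, write):
    write(counter, read(counter) + 1)

threads = [threading.Thread(target=lambda: [atomically(increment) for _ in range(100)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter.value is now 400: every increment committed atomically
```

    The retry loop is what makes the wrapped computation appear atomic: a conflicting interleaving is detected at commit time and the whole transaction re-executes from scratch.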

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been instrumental for site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  11. Computer-Based Training in Math and Working Memory Improves Cognitive Skills and Academic Achievement in Primary School Children: Behavioral Results

    Directory of Open Access Journals (Sweden)

    Noelia Sánchez-Pérez

    2018-01-01

    Full Text Available Student academic achievement has been positively related to further development outcomes, such as the attainment of higher educational, employment, and socioeconomic aspirations. Among all the academic competences, mathematics has been identified as an essential skill in the field of international leadership as well as for those seeking positions in disciplines related to science, technology, and engineering. Given its positive consequences, studies have designed trainings to enhance children's mathematical skills. Additionally, the ability to regulate and control actions and cognitions, i.e., executive functions (EF, has been associated with school success, which has resulted in a strong effort to develop EF training programs to improve students' EF and academic achievement. The present study examined the efficacy of a school computer-based training composed of two components, namely, working memory and mathematics tasks. Among the advantages of using a computer-based training program is the ease with which it can be implemented in school settings and the ease by which the difficulty of the tasks can be adapted to fit the child's ability level. To test the effects of the training, children's cognitive skills (EF and IQ and their school achievement (math and language grades and abilities were evaluated. The results revealed a significant improvement in cognitive skills, such as non-verbal IQ and inhibition, and better school performance in math and reading among the children who participated in the training compared to those children who did not. Most of the improvements were related to training on WM tasks. These findings confirmed the efficacy of a computer-based training that combined WM and mathematics activities as part of the school routines based on the training's impact on children's academic competences and cognitive skills.

  12. Computer-Based Training in Math and Working Memory Improves Cognitive Skills and Academic Achievement in Primary School Children: Behavioral Results.

    Science.gov (United States)

    Sánchez-Pérez, Noelia; Castillo, Alejandro; López-López, José A; Pina, Violeta; Puga, Jorge L; Campoy, Guillermo; González-Salinas, Carmen; Fuentes, Luis J

    2017-01-01

    Student academic achievement has been positively related to further development outcomes, such as the attainment of higher educational, employment, and socioeconomic aspirations. Among all the academic competences, mathematics has been identified as an essential skill in the field of international leadership as well as for those seeking positions in disciplines related to science, technology, and engineering. Given its positive consequences, studies have designed trainings to enhance children's mathematical skills. Additionally, the ability to regulate and control actions and cognitions, i.e., executive functions (EF), has been associated with school success, which has resulted in a strong effort to develop EF training programs to improve students' EF and academic achievement. The present study examined the efficacy of a school computer-based training composed of two components, namely, working memory and mathematics tasks. Among the advantages of using a computer-based training program is the ease with which it can be implemented in school settings and the ease by which the difficulty of the tasks can be adapted to fit the child's ability level. To test the effects of the training, children's cognitive skills (EF and IQ) and their school achievement (math and language grades and abilities) were evaluated. The results revealed a significant improvement in cognitive skills, such as non-verbal IQ and inhibition, and better school performance in math and reading among the children who participated in the training compared to those children who did not. Most of the improvements were related to training on WM tasks. These findings confirmed the efficacy of a computer-based training that combined WM and mathematics activities as part of the school routines based on the training's impact on children's academic competences and cognitive skills.

  13. Progress In Optical Memory Technology

    Science.gov (United States)

    Tsunoda, Yoshito

    1987-01-01

    More than 20 years have passed since the concept of optical memory was first proposed in 1966. Since then, considerable progress has been made in this area, together with the creation of completely new markets for optical memory in consumer and computer applications. The first generation of optical memory was developed mainly with holographic recording technology in the late 1960s and early 1970s. A considerable number of developments were carried out in both analog and digital memory applications. Unfortunately, these technologies never became commercial products. The second generation of optical memory started at the beginning of the 1970s with bit-by-bit recording technology. Read-only optical memories such as video disks and compact audio disks have been extensively investigated. Since laser diodes were first applied to optical video disk readout in 1976, there have been extensive developments of laser-diode pick-ups for optical disk memory systems. The third generation of optical memory started in 1978 with bit-by-bit read/write technology using laser diodes. Development of recording materials, both write-once and erasable, has been actively pursued at several research institutes. These technologies are mainly focused on optical memory systems for computer applications. Such practical applications of optical memory technology have resulted in the creation of new products such as compact audio disks and computer file memories.

  14. MEMORY MODULATION

    Science.gov (United States)

    Roozendaal, Benno; McGaugh, James L.

    2011-01-01

    Our memories are not all created equally strong: Some experiences are well remembered while others are remembered poorly, if at all. Research on memory modulation investigates the neurobiological processes and systems that contribute to such differences in the strength of our memories. Extensive evidence from both animal and human research indicates that emotionally significant experiences activate hormonal and brain systems that regulate the consolidation of newly acquired memories. These effects are integrated through noradrenergic activation of the basolateral amygdala which regulates memory consolidation via interactions with many other brain regions involved in consolidating memories of recent experiences. Modulatory systems not only influence neurobiological processes underlying the consolidation of new information, but also affect other mnemonic processes, including memory extinction, memory recall and working memory. In contrast to their enhancing effects on consolidation, adrenal stress hormones impair memory retrieval and working memory. Such effects, as with memory consolidation, require noradrenergic activation of the basolateral amygdala and interactions with other brain regions. PMID:22122145

  15. Memory Matters

    Science.gov (United States)

    KidsHealth / For Kids / Memory Matters. What's in ... of your complex and multitalented brain. What Is Memory? When an event happens, when you learn something, ...

  16. The parallel processing of EGS4 code on distributed memory scalar parallel computer:Intel Paragon XP/S15-256

    Energy Technology Data Exchange (ETDEWEB)

    Takemiya, Hiroshi; Ohta, Hirofumi; Honma, Ichirou

    1996-03-01

    The parallelization of the electromagnetic cascade Monte Carlo simulation code EGS4 on the distributed-memory scalar parallel computer Intel Paragon XP/S15-256 is described. A feature of EGS4 is that the calculation time varies greatly from one incident particle to another, because of the dynamic generation of secondary particles and the different behavior of each particle. The granularity of the parallel processing, the parallel programming model, and the algorithm for parallel random-number generation are discussed, and two methods, which allocate particles dynamically or statically, are used to realize high-speed parallel processing of this code. Among the four problems chosen for performance evaluation, the speedup factors for three reached nearly 100 with 128 processors. It was found that when both the calculation time per incident particle and its dispersion are large, the dynamic particle allocation method, which averages the load across processors, is preferable; when they are small, the static particle allocation method, which reduces the communication overhead, is preferable. Moreover, it is pointed out that double-precision variables must be used in the EGS4 code to obtain accurate results. Finally, the workflow of program parallelization is analyzed, and tools for program parallelization are discussed in light of the experience gained from the EGS4 parallelization. (author).
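    The dynamic-versus-static trade-off described in this abstract can be sketched in Python (purely illustrative; `track_particle` is a hypothetical stand-in for EGS4's per-particle transport, and threads stand in for the Paragon's processors):

```python
import queue
import threading

def track_particle(seed):
    # Hypothetical stand-in for transporting one incident particle;
    # the real per-particle cost varies with the shower it generates.
    return (seed * 2654435761) % 97

def static_allocation(particles, n_workers):
    # Each processor receives a fixed share up front: no communication
    # during the run, but load-imbalanced when per-particle costs differ.
    results, lock = [], threading.Lock()

    def worker(chunk):
        local = [track_particle(p) for p in chunk]
        with lock:
            results.extend(local)

    threads = [threading.Thread(target=worker, args=(particles[i::n_workers],))
               for i in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sorted(results)

def dynamic_allocation(particles, n_workers):
    # Processors pull particles from a shared queue as they finish: the
    # load averages out, at the price of communication per particle.
    work = queue.Queue()
    for p in particles:
        work.put(p)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                p = work.get_nowait()
            except queue.Empty:
                return
            r = track_particle(p)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sorted(results)
```

    Both strategies compute the same physics (the same multiset of per-particle results); they differ only in scheduling, which is why the paper's choice between them hinges on the dispersion of per-particle cost.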

  17. Sparse distributed memory overview

    Science.gov (United States)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
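    The retrieval-from-partial-cues behaviour described above can be sketched along the lines of Kanerva's model (a minimal NumPy sketch, not the project's implementation; the parameters N=256, M=2000, R=120 and the single-pattern autoassociative demo are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 2000, 120        # word width, number of hard locations, radius

hard_addresses = rng.integers(0, 2, size=(M, N))   # fixed random locations
counters = np.zeros((M, N), dtype=int)             # per-location bit counters

def activated(address):
    # A hard location participates when it lies within Hamming distance R
    # of the presented address: the "sparse" activation of the model.
    dists = np.count_nonzero(hard_addresses != address, axis=1)
    return dists <= R

def write(address, data):
    act = activated(address)
    counters[act] += np.where(data == 1, 1, -1)    # vote each bit up or down

def read(address):
    act = activated(address)
    sums = counters[act].sum(axis=0)
    return (sums > 0).astype(int)                  # majority vote per bit

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                            # autoassociative storage

cue = pattern.copy()
cue[rng.choice(N, size=20, replace=False)] ^= 1    # corrupt 20 of 256 bits
recalled = read(cue)                               # recovers the clean pattern
```

    Because the corrupted cue still activates many of the locations the clean pattern was written to, the per-bit majority vote reconstructs the stored pattern, which is the associative-retrieval property the abstract emphasises.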

  18. Time-Predictable Virtual Memory

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2016-01-01

    Virtual memory is an important feature of modern computer architectures. For hard real-time systems, memory protection is a particularly interesting feature of virtual memory. However, current memory management units are not designed for time-predictability and therefore cannot be used...... in such systems. This paper investigates the requirements on virtual memory from the perspective of hard real-time systems and presents the design of a time-predictable memory management unit. Our evaluation shows that the proposed design can be implemented efficiently. The design allows address translation...... and address range checking in constant time of two clock cycles on a cache miss. This constant time is in strong contrast to the possible cost of a miss in a translation look-aside buffer in traditional virtual memory organizations. Compared to a platform without a memory management unit, these two additional...
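    The constant-time translation idea can be illustrated with a toy single-level translation table (Python, illustrative only; the paper describes a hardware design, which this sketch does not reproduce): every translation is one table index plus one range/validity check, with no history-dependent cost such as a TLB miss.

```python
PAGE_BITS = 12                       # 4 KiB pages (an illustrative choice)
PAGE_SIZE = 1 << PAGE_BITS

class SingleLevelMMU:
    """Toy model of time-predictable translation: one indexed lookup and
    one validity check per access, so the cost never depends on history."""
    def __init__(self, n_virtual_pages):
        self.table = [None] * n_virtual_pages  # virtual page -> physical page

    def map_page(self, vpage, ppage):
        self.table[vpage] = ppage

    def translate(self, vaddr):
        vpage = vaddr >> PAGE_BITS
        offset = vaddr & (PAGE_SIZE - 1)
        # Range and validity check, analogous to a protection fault.
        if vpage >= len(self.table) or self.table[vpage] is None:
            raise MemoryError(f"fault at virtual address {vaddr:#x}")
        return (self.table[vpage] << PAGE_BITS) | offset

mmu = SingleLevelMMU(n_virtual_pages=64)
mmu.map_page(3, 17)
phys = mmu.translate(3 * PAGE_SIZE + 0x2A)   # -> 17 * PAGE_SIZE + 0x2A
```

    The point of the sketch is the worst-case bound: an unmapped page faults immediately rather than triggering a variable-latency table walk.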

  19. Exploring memory hierarchy design with emerging memory technologies

    CERN Document Server

    Sun, Guangyu

    2014-01-01

    This book equips readers with tools for computer architecture of high performance, low power, and high reliability memory hierarchy in computer systems based on emerging memory technologies, such as STTRAM, PCM, FBDRAM, etc.  The techniques described offer advantages of high density, near-zero static power, and immunity to soft errors, which have the potential of overcoming the “memory wall.”  The authors discuss memory design from various perspectives: emerging memory technologies are employed in the memory hierarchy with novel architecture modification;  hybrid memory structure is introduced to leverage advantages from multiple memory technologies; an analytical model named “Moguls” is introduced to explore quantitatively the optimization design of a memory hierarchy; finally, the vulnerability of the CMPs to radiation-based soft errors is improved by replacing different levels of on-chip memory with STT-RAMs.   ·         Provides a holistic study of using emerging memory technologies i...

  20. How Human Memory and Working Memory Work in Second Language Acquisition

    OpenAIRE

    小那覇, 洋子; Onaha, Hiroko

    2014-01-01

    We often draw an analogy between human memory and computers. Information around us is first taken into our memory storage, and then we use that stored information whenever we need it in our daily life. Linguistic information is also in storage, and we process our thoughts based on the memory that is stored. Memory storage consists of multiple memory systems, one of which is called working memory and includes short-term memory. Working memory is the central system that underpins the process...

  1. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, a general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten...

  2. The scopolamine-reversal paradigm in rats and monkeys: the importance of computer-assisted operant-conditioning memory tasks for screening drug candidates.

    Science.gov (United States)

    Buccafusco, Jerry J; Terry, Alvin V; Webster, Scott J; Martin, Daniel; Hohnadel, Elizabeth J; Bouchard, Kristy A; Warner, Samantha E

    2008-08-01

    The scopolamine-reversal model is enjoying a resurgence of interest in clinical studies as a reversible pharmacological model for Alzheimer's disease (AD). The cognitive impairment associated with scopolamine is similar to that in AD. The scopolamine model is not simply a cholinergic model, as it can be reversed by drugs that are noncholinergic cognition-enhancing agents. The objective of the study was to determine the relevance of computer-assisted operant-conditioning tasks to the scopolamine-reversal model in rats and monkeys. Rats were evaluated for their acquisition of a spatial reference memory task in the Morris water maze. A separate cohort was proficient in performance of an automated delayed stimulus discrimination task (DSDT). Rhesus monkeys were proficient in the performance of an automated delayed matching-to-sample task (DMTS). The AD drug donepezil was evaluated for its ability to reverse the decrements in accuracy induced by scopolamine administration in all three tasks. In the DSDT and DMTS tasks, the effects of donepezil were delay (retention interval)-dependent, affecting primarily short-delay trials. Donepezil produced significant but partial reversals of the scopolamine-induced impairment in task accuracies after 2 mg/kg in the water maze, after 1 mg/kg in the DSDT, and after 50 microg/kg in the DMTS task. The two operant-conditioning tasks (DSDT and DMTS) provided data most in keeping with those reported in clinical studies with these drugs. The model applied to nonhuman primates provides an excellent transitional model for new cognition-enhancing drugs before clinical trials.

  3. Method and apparatus for managing access to a memory

    Science.gov (United States)

    DeBenedictis, Erik

    2017-08-01

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.

  4. Extended memory management under RTOS

    Science.gov (United States)

    Plummer, M.

    1981-01-01

    A technique for extended memory management in ROLM 1666 computers using FORTRAN is presented. A general software system to which the technique is ideally suited is described. The memory manager's interface with the system is described, and the protocols by which the manager is invoked are presented, as well as the methods used by the manager.

  5. Josephson Thermal Memory

    Science.gov (United States)

    Guarcello, Claudio; Solinas, Paolo; Braggio, Alessandro; Di Ventra, Massimiliano; Giazotto, Francesco

    2018-01-01

    We propose a superconducting thermal memory device that exploits the thermal hysteresis in a flux-controlled, temperature-biased superconducting quantum-interference device (SQUID). This system exhibits a flux-controllable temperature bistability, which can be used to define two well-distinguishable thermal logic states. We discuss a suitable writing-reading procedure for these memory states. The time of the memory writing operation is expected to be on the order of 0.2 ns for a Nb-based SQUID in thermal contact with a phonon bath at 4.2 K. We suggest a noninvasive readout scheme for the memory states based on the measurement of the effective resonance frequency of a tank circuit inductively coupled to the SQUID. The proposed device paves the way for a practical implementation of thermal logic and computation, and it also represents an example of harvesting thermal energy in superconducting circuits.

  6. Topological Schemas of Memory Spaces

    Science.gov (United States)

    Babichev, Andrey; Dabaghian, Yuri A.

    2018-01-01

    The hippocampal cognitive map—a neuronal representation of the spatial environment—has been widely discussed in the computational neuroscience literature for decades. However, more recent studies point out that the hippocampus plays a major role in producing yet another cognitive framework—the memory space—which incorporates not only spatial but also non-spatial memories. Unlike the cognitive maps, the memory spaces, broadly understood as "networks of interconnections among the representations of events," have not yet been studied from a theoretical perspective. Here we propose a mathematical approach that allows modeling memory spaces constructively, as epiphenomena of neuronal spiking activity, and thus interlinks several important notions of cognitive neurophysiology. First, we suggest that memory spaces have a topological nature—a hypothesis that allows treating both spatial and non-spatial aspects of hippocampal function on an equal footing. We then model the hippocampal memory spaces in different environments and demonstrate that the resulting constructions naturally incorporate the corresponding cognitive maps and provide a wider context for interpreting spatial information. Lastly, we propose a formal description of the memory consolidation process that connects memory spaces to Morris' cognitive schemas, heuristic representations of the acquired memories used to explain the dynamics of learning and memory consolidation in a given environment. The proposed approach allows evaluating these constructs as the most compact representations of the memory space's structure. PMID:29740306

  7. Topological Schemas of Memory Spaces

    Directory of Open Access Journals (Sweden)

    Andrey Babichev

    2018-04-01

    Full Text Available The hippocampal cognitive map—a neuronal representation of the spatial environment—has been widely discussed in the computational neuroscience literature for decades. However, more recent studies point out that the hippocampus plays a major role in producing yet another cognitive framework—the memory space—which incorporates not only spatial but also non-spatial memories. Unlike the cognitive maps, the memory spaces, broadly understood as "networks of interconnections among the representations of events," have not yet been studied from a theoretical perspective. Here we propose a mathematical approach that allows modeling memory spaces constructively, as epiphenomena of neuronal spiking activity, and thus interlinks several important notions of cognitive neurophysiology. First, we suggest that memory spaces have a topological nature—a hypothesis that allows treating both spatial and non-spatial aspects of hippocampal function on an equal footing. We then model the hippocampal memory spaces in different environments and demonstrate that the resulting constructions naturally incorporate the corresponding cognitive maps and provide a wider context for interpreting spatial information. Lastly, we propose a formal description of the memory consolidation process that connects memory spaces to Morris' cognitive schemas, heuristic representations of the acquired memories used to explain the dynamics of learning and memory consolidation in a given environment. The proposed approach allows evaluating these constructs as the most compact representations of the memory space's structure.

  8. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O-efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  9. Memory Modulation

    NARCIS (Netherlands)

    Roozendaal, Benno; McGaugh, James L.

    2011-01-01

    Our memories are not all created equally strong: Some experiences are well remembered while others are remembered poorly, if at all. Research on memory modulation investigates the neurobiological processes and systems that contribute to such differences in the strength of our memories. Extensive

  10. RAM-efficient external memory sorting

    DEFF Research Database (Denmark)

    Arge, Lars; Thorup, Mikkel

    2013-01-01

    In recent years a large number of problems have been considered in external memory models of computation, where the complexity measure is the number of blocks of data that are moved between slow external memory and fast internal memory (also called I/Os). In practice, however, internal memory time...... often dominates the total running time once I/O-efficiency has been obtained. In this paper we study algorithms for fundamental problems that are simultaneously I/O-efficient and internal memory efficient in the RAM model of computation....
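    The external-memory setting described above can be illustrated with a classic two-phase external merge sort (a minimal Python sketch, not the paper's algorithm; `memory_limit` is an assumed stand-in for the internal-memory capacity in records, and temp files stand in for slow external memory):

```python
import heapq
import os
import tempfile

def external_sort(values, memory_limit=4):
    # Phase 1: split the input into runs that fit in "internal memory",
    # sort each run, and spill it to "external memory" (temp files).
    run_files = []
    run = []

    def spill():
        f = tempfile.NamedTemporaryFile("w+", delete=False)
        for v in sorted(run):
            f.write(f"{v}\n")
        f.seek(0)
        run_files.append(f)
        run.clear()

    for v in values:
        run.append(v)
        if len(run) >= memory_limit:
            spill()
    if run:
        spill()

    # Phase 2: lazy k-way merge of the sorted runs; only one record per
    # run needs to reside in internal memory at any moment.
    streams = [(int(line) for line in f) for f in run_files]
    merged = list(heapq.merge(*streams))
    for f in run_files:
        f.close()
        os.unlink(f.name)
    return merged
```

    In the I/O model, phase 1 reads and writes each record once and phase 2 reads each run sequentially, which is the block-transfer accounting the abstract refers to; the paper's concern is that the internal (RAM-model) work of the merge can dominate once those I/Os are minimised.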

  11. Memory Dysfunction

    Science.gov (United States)

    Matthews, Brandy R.

    2015-01-01

    Purpose of Review: This article highlights the dissociable human memory systems of episodic, semantic, and procedural memory in the context of neurologic illnesses known to adversely affect specific neuroanatomic structures relevant to each memory system. Recent Findings: Advances in functional neuroimaging and refinement of neuropsychological and bedside assessment tools continue to support a model of multiple memory systems that are distinct yet complementary and to support the potential for one system to be engaged as a compensatory strategy when a counterpart system fails. Summary: Episodic memory, the ability to recall personal episodes, is the subtype of memory most often perceived as dysfunctional by patients and informants. Medial temporal lobe structures, especially the hippocampal formation and associated cortical and subcortical structures, are most often associated with episodic memory loss. Episodic memory dysfunction may present acutely, as in concussion; transiently, as in transient global amnesia (TGA); subacutely, as in thiamine deficiency; or chronically, as in Alzheimer disease. Semantic memory refers to acquired knowledge about the world. Anterior and inferior temporal lobe structures are most often associated with semantic memory loss. The semantic variant of primary progressive aphasia (svPPA) is the paradigmatic disorder resulting in predominant semantic memory dysfunction. Working memory, associated with frontal lobe function, is the active maintenance of information in the mind that can be potentially manipulated to complete goal-directed tasks. Procedural memory, the ability to learn skills that become automatic, involves the basal ganglia, cerebellum, and supplementary motor cortex. Parkinson disease and related disorders result in procedural memory deficits. Most memory concerns warrant bedside cognitive or neuropsychological evaluation and neuroimaging to assess for specific neuropathologies and guide treatment. PMID:26039844

  12. Declarative memory.

    Science.gov (United States)

    Riedel, Wim J; Blokland, Arjan

    2015-01-01

    Declarative memory consists of memory for events (episodic memory) and facts (semantic memory). Methods to test declarative memory are key in investigating effects of potential cognition-enhancing substances--medicinal drugs or nutrients. A number of cognitive performance tests assessing declarative episodic memory, tapping verbal learning, logical memory, pattern recognition memory, and paired associates learning, are described. These tests have been used as outcome variables in 34 human studies described in the literature in the past 10 years. The use of episodic tests in animal research is also discussed, in relation to the drug effects in these tasks. The results show that nutritional supplementation of polyunsaturated fatty acids has been investigated most abundantly and, in a number of cases but not all, shows indications of positive effects on declarative memory, more so in elderly than in young subjects. Studies investigating effects of registered anti-Alzheimer drugs, cholinesterase inhibitors, in mild cognitive impairment show positive and negative effects on declarative memory. Studies mainly carried out in healthy volunteers investigating the effects of acute dopamine stimulation indicate enhanced memory consolidation, manifested specifically by better delayed recall, especially at time points long after learning, more so when the drug is administered after learning, and if word lists are longer. The animal studies reveal a different picture with respect to the effects of different drugs on memory performance. This suggests that, at least for episodic memory tasks, the translational value is rather poor. For the human studies, detailed parameters of the compositions of word lists for declarative memory tests are discussed, and it is concluded that tailored adaptations of tests to fit the hypothesis under study, rather than "off-the-shelf" use of existing tests, are recommended.

  13. Deciphering Neural Codes of Memory during Sleep

    Science.gov (United States)

    Chen, Zhe; Wilson, Matthew A.

    2017-01-01

    Memories of experiences are stored in the cerebral cortex. Sleep is critical for consolidating hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms of memory consolidation, and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here, we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches for analyzing sleep-associated neural codes (SANC). We focus on two analysis paradigms for sleep-associated memory, and propose a new unsupervised learning framework (“memory first, meaning later”) for unbiased assessment of SANC. PMID:28390699

  14. Can Motivation Normalize Working Memory and Task Persistence in Children with Attention-Deficit/Hyperactivity Disorder? The Effects of Money and Computer-Gaming

    Science.gov (United States)

    Dovis, Sebastiaan; van der Oord, Saskia; Wiers, Reinout W.; Prins, Pier J. M.

    2012-01-01

    Visual-spatial "Working Memory" (WM) is the most impaired executive function in children with Attention-Deficit/Hyperactivity Disorder (ADHD). Some suggest that deficits in executive functioning are caused by motivational deficits. However, there are no studies that investigate the effects of motivation on the visual-spatial WM of children with-…

  15. Can motivation normalize working memory and task persistence in children with attention-deficit/hyperactivity disorder? The effects of money and computer-gaming

    NARCIS (Netherlands)

    Dovis, S.; van der Oord, S.; Wiers, R.W.; Prins, P.J.M.

    2012-01-01

    Visual-spatial Working Memory (WM) is the most impaired executive function in children with Attention-Deficit/Hyperactivity Disorder (ADHD). Some suggest that deficits in executive functioning are caused by motivational deficits. However, there are no studies that investigate the effects of

  16. Computational modelling of Ti50Pt50-xMx shape memory alloys (M: Ni, Ir or Pd and x = 6.25-43.75 at.%)

    CSIR Research Space (South Africa)

    Modiba, Rosinah M

    2017-09-01

    Full Text Available and medical stents (Wu and Schetky, 2000; Otsuka and Kakeshita, 2002; Van Humbeeck, 1999; Duerig, Pelton, and Stockel, 1999). A wide range of alloys are known to exhibit the shape memory effect, which occurs when a material returns to, or ‘remembers’, its...

  17. Computational modelling of Ti50Pt50-xMx shape memory alloys (M: Ni, Ir or Pd and x = 6.25-43.75 at.%)

    CSIR Research Space (South Africa)

    Modiba, Rosinah M

    2017-09-01

    Full Text Available The ab initio density functional theory approach was employed to study the effect of Ni, Ir or Pd addition to the TiPt shape memory alloy. The supercell approach in VASP was used to substitute Pt with 6.25, 18.75, 25.00, 31.25 and 43.75 at.% Ni, Ir...

  18. Short-term action potential memory and electrical restitution: A cellular computational study on the stability of cardiac repolarization under dynamic pacing.

    Directory of Open Access Journals (Sweden)

    Massimiliano Zaniboni

    Full Text Available Electrical restitution (ER is a major determinant of repolarization stability and, under fast pacing rate, it reveals memory properties of the cardiac action potential (AP, whose dynamics have never been fully elucidated, nor their ionic mechanisms. Previous studies have looked at ER mainly in terms of changes in AP duration (APD when the preceding diastolic interval (DI changes and described dynamic conditions where this relationship shows hysteresis which, in turn, has been proposed as a marker of short-term AP memory and repolarization stability. By means of numerical simulations of a non-propagated human ventricular AP, we show here that measuring ER as APD versus the preceding cycle length (CL provides additional information on repolarization dynamics which is not contained in the companion formulation. We focus particularly on fast pacing rate conditions with a beat-to-beat variable CL, where memory properties emerge from APD vs CL and not from APD vs DI and should thus be stored in APD and not in DI. We provide an ion-currents characterization of such conditions under periodic and random CL variability, and show that the memory stored in APD plays a stabilizing role on AP repolarization under pacing rate perturbations. The gating kinetics of L-type calcium current seems to be the main determinant of this safety mechanism. We also show that, at fast pacing rate and under otherwise identical pacing conditions, a periodically beat-to-beat changing CL is more effective than a random one in stabilizing repolarization. In summary, we propose a novel view of short-term AP memory, differentially stored between systole and diastole, which opens a number of methodological and theoretical implications for the understanding of arrhythmia development.
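
    The dynamic-pacing relationship discussed above can be illustrated with a minimal restitution-map sketch (not the authors' model; the exponential restitution function and all constants are invented for illustration): APD(n+1) = f(DI(n)) with CL(n) = APD(n) + DI(n), so a beat-to-beat variable cycle length drives the next action potential duration.

```python
import math

def apd_restitution(di, apd_max=300.0, amp=120.0, tau=50.0):
    """Illustrative exponential restitution function: APD as a
    function of the preceding diastolic interval (DI), in ms."""
    return apd_max - amp * math.exp(-di / tau)

def pace(cycle_lengths, apd0=250.0):
    """Iterate the restitution map under beat-to-beat varying
    cycle lengths: CL_n = APD_n + DI_n, so DI_n = CL_n - APD_n."""
    apds = [apd0]
    for cl in cycle_lengths:
        di = cl - apds[-1]
        if di <= 0:          # stimulus falls in refractory period: skip
            continue
        apds.append(apd_restitution(di))
    return apds

# constant fast pacing vs. a periodically varying cycle length (ms)
steady = pace([400.0] * 20)
alternating = pace([380.0, 420.0] * 10)
```

    With a restitution slope below one, as here, the constant-CL series settles to a fixed APD; steeper slopes would produce alternans, the instability the abstract's memory mechanism stabilizes against.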

  19. Memory design

    DEFF Research Database (Denmark)

    Tanderup, Sisse

    Mind and Matter - Nordik 2009 Conference for Art Historians Design Matters Contributed Memory design BACKGROUND My research concerns the use of memory categories in the designs by the companies Alessi and Georg Jensen. When Alessi's designers create their products, they are usually inspired by cultural forms, often specifically by the concept of memory in philosophy, sociology and psychology, while Danish design traditionally has been focusing on form and function with frequent references to the forms of nature. Alessi's motivation for investigating the concept of memory is that it adds a cultural dimension to the design objects, enabling the objects to make an identity-forming impact. Whether or not the concept of memory plays a significant role in Danish design has not yet been elucidated fully. TERMINOLOGY The concept of "memory design" refers to the idea that design carries...

  20. Disputed Memory

    DEFF Research Database (Denmark)

    ...century in the region. Written by an international group of scholars from a diversity of disciplines, the chapters approach memory disputes in methodologically innovative ways, studying representations and negotiations of disputed pasts in different media, including monuments, museum exhibitions, individual and political discourse and electronic social media. Analyzing memory disputes in various local, national and transnational contexts, the chapters demonstrate the political power and social impact of painful and disputed memories. The book brings new insights into current memory disputes in Central, Eastern and Southeastern Europe. It contributes to the understanding of processes of memory transmission and negotiation across borders and cultures in Europe, emphasizing the interconnectedness of memory with emotions, mediation and politics.

  1. Computers, the Human Mind, and My In-Laws' House.

    Science.gov (United States)

    Esque, Timm J.

    1996-01-01

    Discussion of human memory, computer memory, and the storage of information focuses on a metaphor that can account for memory without storage and can set the stage for systemic research around a more comprehensive, understandable theory. (Author/LRW)

  2. Overview of emerging nonvolatile memory technologies.

    Science.gov (United States)

    Meena, Jagan Singh; Sze, Simon Min; Chand, Umesh; Tseng, Tseung-Yuen

    2014-01-01

    Nonvolatile memory technologies in Si-based electronics date back to the 1990s. The ferroelectric field-effect transistor (FeFET) was one of the most promising devices to replace conventional Flash memory, which was already facing physical scaling limitations at the time. A variant of charge storage memory referred to as Flash memory is widely used in consumer electronic products such as cell phones and music players, while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. Emerging memory technologies promise to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets including digital cameras, cell phones and portable music players, and are being investigated as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory, and so become very attractive candidates for future memory hierarchies. Many other new classes of emerging memory technologies, such as transparent and plastic, three-dimensional (3-D), and quantum dot memory technologies, have also gained tremendous popularity in recent years. Indeed, it is not an exaggeration to say that computer memory could soon earn the ultimate commercial validation for scale-up and production: the cheap plastic knockoff. Therefore, this review is devoted to the rapidly developing new

  4. Main Memory

    OpenAIRE

    Boncz, Peter; Liu, Lei; Özsu, M.

    2008-01-01

    Primary storage, presently known as main memory, is the largest memory directly accessible to the CPU in the prevalent Von Neumann model and stores both data and instructions (program code). The CPU continuously reads instructions stored there and executes them. It is also called Random Access Memory (RAM), to indicate that load/store instructions can access data at any location at the same cost. It is usually implemented using DRAM chips, which are connected to the CPU and other per...

  5. 32-Bit FASTBUS computer

    International Nuclear Information System (INIS)

    Blossom, J.M.; Hong, J.P.; Kellner, R.G.

    1985-01-01

    Los Alamos National Laboratory is building a 32-bit FASTBUS computer using the NATIONAL SEMICONDUCTOR 32032 central processing unit (CPU) and containing 16 million bytes of memory. The board can act both as a FASTBUS master and as a FASTBUS slave. It contains a custom direct memory access (DMA) channel which can perform 80 million bytes per second block transfers across the FASTBUS

  6. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  7. The gravitational-wave memory effect

    International Nuclear Information System (INIS)

    Favata, Marc

    2010-01-01

    The nonlinear memory effect is a slowly growing, non-oscillatory contribution to the gravitational-wave amplitude. It originates from gravitational waves that are sourced by the previously emitted waves. In an ideal gravitational-wave interferometer a gravitational wave with memory causes a permanent displacement of the test masses that persists after the wave has passed. Surprisingly, the nonlinear memory affects the signal amplitude starting at leading (Newtonian-quadrupole) order. Despite this fact, the nonlinear memory is not easily extracted from current numerical relativity simulations. After reviewing the linear and nonlinear memory I summarize some recent work, including (1) computations of the memory contribution to the inspiral waveform amplitude (thus completing the waveform to third post-Newtonian order); (2) the first calculations of the nonlinear memory that include all phases of binary black hole coalescence (inspiral, merger, ringdown); and (3) realistic estimates of the detectability of the memory with LISA.

  8. Collaging Memories

    Science.gov (United States)

    Wallach, Michele

    2011-01-01

    Even middle school students can have memories of their childhoods, of an earlier time. The art of Romare Bearden and the writings of Paul Auster can be used to introduce ideas about time and memory to students and inspire works of their own. Bearden is an exceptional role model for young artists, not only because of his astounding art, but also…

  9. Memory Magic.

    Science.gov (United States)

    Hartman, Thomas G.; Nowak, Norman

    This paper outlines several "tricks" that aid students in improving their memories. The distinctions between operational and figural thought processes are noted. Operational memory is described as something that allows adults to make generalizations about numbers and the rules by which they may be combined, thus leading to easier memorization.…

  10. Memory loss

    Science.gov (United States)

    ... barbiturates or ( hypnotics ) ECT (electroconvulsive therapy) (most often short-term memory loss) Epilepsy that is not well controlled Illness that ... appointment. Medical history questions may include: Type of memory loss, such as short-term or long-term Time pattern, such as how ...

  11. Episodic Memories

    Science.gov (United States)

    Conway, Martin A.

    2009-01-01

    An account of episodic memories is developed that focuses on the types of knowledge they represent, their properties, and the functions they might serve. It is proposed that episodic memories consist of "episodic elements," summary records of experience often in the form of visual images, associated to a "conceptual frame" that provides a…

  12. Flavor Memory

    NARCIS (Netherlands)

    Mojet, Jos; Köster, Ep

    2016-01-01

    Odor, taste, texture, temperature, and pain all contribute to the perception and memory of food flavor. Flavor memory is also strongly linked to the situational aspects of previous encounters with the flavor, but does not depend on the precise recollection of its sensory features as in vision and

  13. Main Memory

    NARCIS (Netherlands)

    P.A. Boncz (Peter); L. Liu (Lei); M. Tamer Özsu

    2008-01-01

    Primary storage, presently known as main memory, is the largest memory directly accessible to the CPU in the prevalent Von Neumann model and stores both data and instructions (program code). The CPU continuously reads instructions stored there and executes them. It is also called Random

  14. COMPUTERS IN SURGERY

    African Journals Online (AJOL)

    BODE

    Key words: Computers, surgery, applications. Introduction ... With improved memory, speed and processing power in an ever more compact ... with picture and voice embedment to wit. With the ... recall the tedium of anatomy, physiology and.

  15. Accessing memory

    Science.gov (United States)

    Yoon, Doe Hyun; Muralimanohar, Naveen; Chang, Jichuan; Ranganthan, Parthasarathy

    2017-09-26

    A disclosed example method involves performing simultaneous data accesses on at least first and second independently selectable logical sub-ranks to access first data via a wide internal data bus in a memory device. The memory device includes a translation buffer chip, memory chips in independently selectable logical sub-ranks, a narrow external data bus to connect the translation buffer chip to a memory controller, and the wide internal data bus between the translation buffer chip and the memory chips. A data access is performed on only the first independently selectable logical sub-rank to access second data via the wide internal data bus. The example method also involves locating a first portion of the first data, a second portion of the first data, and the second data on the narrow external data bus during separate data transfers.

  16. Performing an allreduce operation using shared memory

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Dozsa, Gabor [Ardsley, NY; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.
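
    A minimal shared-memory sketch of the scheme described above (hypothetical names; a real implementation runs on compute-node cores, not Python threads): a job status object lists the reduction work units, and each available core claims and performs the next unit until the allreduce completes.

```python
import threading

def shared_memory_allreduce(data, num_cores=4, chunk=1024):
    """Sum-reduce `data` by letting each available worker repeatedly
    claim the next shared-memory work unit (a slice to sum) from a
    shared job status object, folding partials into a shared result."""
    lock = threading.Lock()
    state = {"next": 0, "acc": 0}            # job status object

    def worker():
        while True:
            with lock:                        # claim the next work unit
                start = state["next"]
                if start >= len(data):
                    return
                state["next"] = start + chunk
            partial = sum(data[start:start + chunk])  # perform the unit
            with lock:
                state["acc"] += partial       # fold into shared result

    threads = [threading.Thread(target=worker) for _ in range(num_cores)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["acc"]
```

    In the patented design each core would also broadcast the combined result back to all processes; this sketch shows only the shared-memory work-unit scheduling.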

  17. Internode data communications in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
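
    The buffering scheme can be sketched as follows (a simplified illustration with invented class names, not the patented implementation): the messaging unit pre-allocates one buffer per expected process at boot, parks messages that arrive before the process initializes, and the process drains that buffer into main memory when it starts.

```python
class MessagingUnit:
    """At boot time, pre-allocate one message buffer per process
    expected to run on this compute node."""
    def __init__(self, process_ids):
        self.buffers = {pid: [] for pid in process_ids}

    def receive(self, pid, message):
        # Message arrives before the target process initializes:
        # store it in that process's pre-allocated buffer.
        self.buffers[pid].append(message)

class NodeProcess:
    """On initialization, establish a buffer in node main memory and
    copy any early-arrival messages out of the messaging unit."""
    def __init__(self, pid, unit):
        self.main_memory = []
        self.main_memory.extend(unit.buffers.pop(pid))

unit = MessagingUnit([0, 1])
unit.receive(0, "hello before init")   # arrives pre-initialization
p = NodeProcess(0, unit)               # drains the parked message
```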

  18. External Memory Pipelining Made Easy With TPIE

    OpenAIRE

    Arge, Lars; Rav, Mathias; Svendsen, Svend C.; Truelsen, Jakob

    2017-01-01

    When handling large datasets that exceed the capacity of the main memory, movement of data between main memory and external memory (disk), rather than actual (CPU) computation time, is often the bottleneck in the computation. Since data is moved between disk and main memory in large contiguous blocks, this has led to the development of a large number of I/O-efficient algorithms that minimize the number of such block movements. TPIE is one of two major libraries that have been developed to sup...
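
    The I/O model underlying such libraries can be sketched briefly (an illustration of the blocked-transfer idea, not TPIE's actual API): data moves between disk and main memory in blocks of B items, so a streaming pass over N items costs about ceil(N/B) block transfers rather than N individual accesses.

```python
import math

def scan_io_cost(n_items, block_size):
    """A streaming pass over N items in blocks of B items costs
    ceil(N / B) block transfers in the external-memory model."""
    return math.ceil(n_items / block_size)

def external_sum(items, block_size=4096):
    """Block-wise pass over a dataset (a list stands in for a
    disk-backed stream); counts block transfers alongside the sum."""
    total, ios = 0, 0
    for start in range(0, len(items), block_size):
        block = items[start:start + block_size]   # one block transfer
        ios += 1
        total += sum(block)                       # CPU work per block
    return total, ios

total, ios = external_sum(list(range(10000)))
# 10000 items in blocks of 4096 -> 3 block transfers
```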

  19. Assessing Working Memory in Children: The Comprehensive Assessment Battery for Children – Working Memory (CABC-WM)

    OpenAIRE

    Cabbage, Kathryn; Brinkley, Shara; Gray, Shelley; Alt, Mary; Cowan, Nelson; Green, Samuel; Kuo, Trudy; Hogan, Tiffany P.

    2017-01-01

    The Comprehensive Assessment Battery for Children - Working Memory (CABC-WM) is a computer-based battery designed to assess different components of working memory in young school-age children. Working memory deficits have been identified in children with language-based learning disabilities, including dyslexia [1, 2] and language impairment [3, 4], but it is not clear whether these children exhibit deficits in subcomponents of working memory, such as visuospatial or phonological working memory. The C...

  20. Memory Reconsolidation.

    Science.gov (United States)

    Haubrich, Josue; Nader, Karim

    2018-01-01

    Scientific advances in the last decades uncovered that memory is not a stable, fixed entity. Apparently stable memories may become transiently labile and susceptible to modifications when retrieved due to the process of reconsolidation. Here, we review the initial evidence and the logic on which reconsolidation theory is based, the wide range of conditions in which it has been reported and recent findings further revealing the fascinating nature of this process. Special focus is given to conceptual issues of when and why reconsolidation happen and its possible outcomes. Last, we discuss the potential clinical implications of memory modifications by reconsolidation.

  1. Olfactory Memory

    Science.gov (United States)

    Eichenbaum, Howard; Robitsek, R. Jonathan

    2009-01-01

    Odor-recognition memory in rodents may provide a valuable model of cognitive aging. In a recent study we used signal detection analyses to distinguish odor recognition based on recollection versus that based on familiarity. Aged rats were selectively impaired in recollection, with relative sparing of familiarity, and the deficits in recollection were correlated with spatial memory impairments. These results complement electro-physiological findings indicating age-associated deficits in the ability of hippocampal neurons to differentiate contextual information, and this information-processing impairment may underlie the common age-associated decline in olfactory and spatial memory. PMID:19686208

  2. Quantitative Characterization of the T Cell Receptor Repertoire of Naïve and Memory Subsets Using an Integrated Experimental and Computational Pipeline Which Is Robust, Economical, and Versatile

    Directory of Open Access Journals (Sweden)

    Theres Oakes

    2017-10-01

    Full Text Available The T cell receptor (TCR repertoire can provide a personalized biomarker for infectious and non-infectious diseases. We describe a protocol for amplifying, sequencing, and analyzing TCRs which is robust, sensitive, and versatile. The key experimental step is ligation of a single-stranded oligonucleotide to the 3′ end of the TCR cDNA. This allows amplification of all possible rearrangements using a single set of primers per locus. It also introduces a unique molecular identifier to label each starting cDNA molecule. This molecular identifier is used to correct for sequence errors and for effects of differential PCR amplification efficiency, thus producing more accurate measures of the true TCR frequency within the sample. This integrated experimental and computational pipeline is applied to the analysis of human memory and naive subpopulations, and results in consistent measures of diversity and inequality. After error correction, the distribution of TCR sequence abundance in all subpopulations followed a power law over a wide range of values. The power law exponent differed between naïve and memory populations, but was consistent between individuals. The integrated experimental and analysis pipeline we describe is appropriate to studies of T cell responses in a broad range of physiological and pathological contexts.
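
    The molecular-identifier correction described above can be sketched as follows (a toy illustration with invented read data, not the published pipeline): PCR duplicates of one starting cDNA molecule share a unique molecular identifier (UMI), so counting distinct UMIs per TCR sequence estimates the true molecule count, correcting for differential amplification efficiency.

```python
from collections import Counter, defaultdict

def collapse_by_umi(reads):
    """Given (tcr_sequence, umi) pairs, count distinct UMIs per TCR
    sequence: each distinct UMI marks one starting cDNA molecule."""
    umis = defaultdict(set)
    for tcr_seq, umi in reads:
        umis[tcr_seq].add(umi)
    return Counter({seq: len(s) for seq, s in umis.items()})

reads = [("CASSL", "u1"), ("CASSL", "u1"),   # PCR duplicates: one molecule
         ("CASSL", "u2"),                    # second molecule, same TCR
         ("CASSQ", "u3")]
counts = collapse_by_umi(reads)
# "CASSL" was seen on two distinct UMIs -> 2 starting molecules
```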

  3. Atomic crystals resistive switching memory

    International Nuclear Information System (INIS)

    Liu Chunsen; Zhang David Wei; Zhou Peng

    2017-01-01

    Facing the growing data storage and computing demands, a high accessing speed memory with low power and non-volatile character is urgently needed. Resistive random access memory with 4F² cell size, switching in sub-nanosecond, cycling endurance of over 10¹² cycles, and information retention exceeding 10 years is considered a promising next-generation non-volatile memory. However, the energy per bit is still too high to compete against static random access memory and dynamic random access memory. The sneak leakage path and metal film sheet resistance issues hinder further scaling down. The variation of resistance between different devices, and even between cycles in the same device, holds resistive random access memory back from commercialization. The emergence of atomic crystals, possessing fine interfaces without dangling bonds in low dimension, can provide atomic-level solutions for these persistent issues. Moreover, the unique properties of atomic crystals also enable new types of resistive switching memories, which provide a brand-new direction for resistive random access memory. (topical reviews)

  4. Multiferroic Memories

    Directory of Open Access Journals (Sweden)

    Amritendu Roy

    2012-01-01

    Full Text Available Multiferroism implies the simultaneous presence of more than one ferroic characteristic, such as the coexistence of ferroelectric and magnetic ordering. This phenomenon has led to the development of various kinds of materials and conceptions of many novel applications, such as a memory device utilizing the multifunctionality of multiferroic materials, leading to a multistate memory device with electrical writing and nondestructive magnetic reading operations. Though the interdependence of electrical- and magnetic-order parameters makes it difficult to accomplish the above, restricting the device to only two switchable states, recent research has shown that such problems can be circumvented by novel device designs, such as formation of a tunnel junction or by use of exchange bias. In this paper, we review the operational aspects of multiferroic memories as well as the materials used for these applications, along with the designs that hold promise for future memory devices.

  5. Milestoning with transition memory

    Science.gov (United States)

    Hawk, Alexander T.; Makarov, Dmitrii E.

    2011-12-01

    Milestoning is a method used to calculate the kinetics and thermodynamics of molecular processes occurring on time scales that are not accessible to brute force molecular dynamics (MD). In milestoning, the conformation space of the system is sectioned by hypersurfaces (milestones), an ensemble of trajectories is initialized on each milestone, and MD simulations are performed to calculate transitions between milestones. The transition probabilities and transition time distributions are then used to model the dynamics of the system with a Markov renewal process, wherein a long trajectory of the system is approximated as a succession of independent transitions between milestones. This approximation is justified if the transition probabilities and transition times are statistically independent. In practice, this amounts to a requirement that milestones are spaced such that trajectories lose position and velocity memory between subsequent transitions. Unfortunately, limiting the number of milestones limits both the resolution at which a system's properties can be analyzed, and the computational speedup achieved by the method. We propose a generalized milestoning procedure, milestoning with transition memory (MTM), which accounts for memory of previous transitions made by the system. When a reaction coordinate is used to define the milestones, the MTM procedure can be carried out at no significant additional expense as compared to conventional milestoning. To test MTM, we have applied its version that allows for the memory of the previous step to the toy model of a polymer chain undergoing Langevin dynamics in solution. We have computed the mean first passage time for the chain to attain a cyclic conformation and found that the number of milestones that can be used without incurring significant errors in the first passage time is at least 8 times that permitted by conventional milestoning. We further demonstrate that, unlike conventional milestoning, MTM permits
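
    The Markov renewal approximation described above can be sketched numerically (a toy three-milestone example with invented transition probabilities and lifetimes, not the authors' polymer model): the mean first-passage time from milestone i to a target milestone satisfies t_i = tau_i + sum_j K_ij * t_j, with the target absorbing (t = 0).

```python
def milestoning_mfpt(K, tau, target, tol=1e-12, max_iter=100000):
    """Solve the Markov-renewal mean first-passage time equations
    t_i = tau_i + sum_j K[i][j] * t_j by fixed-point iteration,
    with the target milestone held absorbing at t = 0."""
    n = len(tau)
    t = [0.0] * n
    for _ in range(max_iter):
        t_new = [0.0 if i == target else
                 tau[i] + sum(K[i][j] * t[j] for j in range(n))
                 for i in range(n)]
        if max(abs(a - b) for a, b in zip(t, t_new)) < tol:
            return t_new
        t = t_new
    return t

# three milestones on a line; milestone 2 is the product state
K = [[0.0, 1.0, 0.0],     # from 0 the system can only reach 1
     [0.5, 0.0, 0.5],     # from 1, equal chance of going back or on
     [0.0, 0.0, 0.0]]     # absorbing target
tau = [1.0, 1.0, 0.0]     # mean transition times out of each milestone
mfpt = milestoning_mfpt(K, tau, target=2)
# analytic check: t1 = 1 + 0.5*t0, t0 = 1 + t1  ->  t0 = 4, t1 = 3
```

    MTM generalizes this picture by letting the transition kernel depend on the previous transition, rather than on the current milestone alone.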

  6. Color Memory

    OpenAIRE

    Pate, Monica; Raclariu, Ana-Maria; Strominger, Andrew

    2017-01-01

    A transient color flux across null infinity in classical Yang-Mills theory is considered. It is shown that a pair of test `quarks' initially in a color singlet generically acquire net color as a result of the flux. A nonlinear formula is derived for the relative color rotation of the quarks. For weak color flux the formula linearizes to the Fourier transform of the soft gluon theorem. This color memory effect is the Yang-Mills analog of the gravitational memory effect.

  7. Quantum memories: emerging applications and recent advances

    Science.gov (United States)

    Heshami, Khabat; England, Duncan G.; Humphreys, Peter C.; Bustard, Philip J.; Acosta, Victor M.; Nunn, Joshua; Sussman, Benjamin J.

    2016-01-01

    Quantum light–matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories. PMID:27695198

  8. A Computationally-Efficient, Multi-Mechanism Based Framework for the Comprehensive Modeling of the Evolutionary Behavior of Shape Memory Alloys

    Science.gov (United States)

    Saleeb, Atef F.; Vaidyanathan, Raj

    2016-01-01

    The report summarizes the accomplishments made during the 4-year duration of the project. Here, the major emphasis is placed on the different tasks performed by the two research teams; i.e., the modeling activities by the University of Akron (UA) team and the experimental and neutron diffraction studies conducted by the University of Central Florida (UCF) team, during this 4-year period. Further technical details are given in the upcoming sections by UA and UCF for each of the milestones/years (together with the corresponding figures and captions). The project primarily involved the development, validation, and application of a general theoretical model that is capable of capturing the nonlinear hysteretic responses, including pseudoelasticity, the shape memory effect, rate dependency, multi-axiality, and asymmetry in the tension versus compression response of shape memory alloys. Among the targeted goals for the SMA model was its ability to account for the evolutionary character of the response (including transient and long-term behavior under sustained cycles) for both conventional and high temperature (HT) SMAs, as well as being able to simulate some of the devices which exploit these unique material systems. This required the extensive (uniaxial and multi-axial) experiments needed to guide us in calibrating and characterizing the model. Moreover, since the model is formulated on the theoretical notion of internal state variables (ISVs), neutron diffraction experiments were needed to establish the linkage between the micromechanical changes and these ISVs. In addition, the design of the model should allow easy implementation in large-scale finite element applications to study the behavior of devices making use of these SMA materials under different loading controls. A detailed summary of the activities and progress/achievements made during this period is given below for the University of Akron (Section 2.0) and the University of Central Florida (Section 3.0).

  9. Super-activating Quantum Memory with Entanglement

    OpenAIRE

    Guan, Ji; Feng, Yuan; Ying, Mingsheng

    2017-01-01

    Noiseless subsystems have been proved to be an efficient and faithful approach to preserving fragile information against decoherence in quantum information processing and quantum computation. They have been employed to design a general (hybrid) quantum memory cell model that can store both quantum and classical information. In this Letter, we report an interesting new phenomenon: a purely classical memory cell can be super-activated to preserve quantum states, whereas the null memory cell can only be...

  10. External-Memory Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Arge, Lars; Zeh, Norbert

    2010-01-01

    The data sets involved in many modern applications are often too massive to fit in main memory of even the most powerful computers and must therefore reside on disk. Thus communication between internal and external memory, and not actual computation time, becomes the bottleneck in the computation....... This is due to the huge difference in access time of fast internal memory and slower external memory such as disks. The goal of theoretical work in the area of external memory algorithms (also called I/O algorithms or out-of-core algorithms) has been to develop algorithms that minimize the Input...... in parallel and the use of parallel disks has received a lot of theoretical attention. See below for recent surveys of theoretical results in the area of I/O-efficient algorithms. TPIE is designed to bridge the gap between the theory and practice of parallel I/O systems. It is intended to demonstrate all...

  11. Modeling reconsolidation in kernel associative memory.

    Directory of Open Access Journals (Sweden)

    Dimitri Nowicki

    Full Text Available Memory reconsolidation is a central process enabling adaptive memory and the perception of a constantly changing reality. It causes memories to be strengthened, weakened or changed following their recall. A computational model of memory reconsolidation is presented. Unlike Hopfield-type memory models, our model introduces an unbounded number of attractors that are updatable and can process real-valued, large, realistic stimuli. Our model replicates three characteristic effects of the reconsolidation process on human memory: increased association, extinction of fear memories, and the ability to track and follow gradually changing objects. In addition to this behavioral validation, a continuous time version of the reconsolidation model is introduced. This version extends average rate dynamic models of brain circuits exhibiting persistent activity to include adaptivity and an unbounded number of attractors.

  12. Emerging memory technologies design, architecture, and applications

    CERN Document Server

    2014-01-01

    This book explores the design implications of emerging, non-volatile memory (NVM) technologies on future computer memory hierarchy architecture designs. Since NVM technologies combine the speed of SRAM, the density of DRAM, and the non-volatility of Flash memory, they are very attractive as the basis for future universal memories. This book provides a holistic perspective on the topic, covering modeling, design, architecture and applications. The practical information included in this book will enable designers to exploit emerging memory technologies to improve significantly the performance/power/reliability of future, mainstream integrated circuits. • Provides a comprehensive reference on designing modern circuits with emerging, non-volatile memory technologies, such as MRAM and PCRAM; • Explores new design opportunities offered by emerging memory technologies, from a holistic perspective; • Describes topics in technology, modeling, architecture and applications; • Enables circuit designers to ex...

  13. Optimizing the Noticing of Recasts via Computer-Delivered Feedback: Evidence That Oral Input Enhancement and Working Memory Help Second Language Learning

    Science.gov (United States)

    Sagarra, Nuria; Abbuhl, Rebekha

    2013-01-01

    This study investigates whether practice with computer-administered feedback in the absence of meaning-focused interaction can help second language learners notice the corrective intent of recasts and develop linguistic accuracy. A group of 218 beginning Anglophone learners of Spanish received 1 of 4 types of automated feedback (no feedback,…

  14. A memory module for experimental data handling

    Science.gov (United States)

    De Blois, J.

    1985-02-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need of direct memory access in computer controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment-, a list- and a fifo mode. This is achieved by connecting the main parts, being: the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory by an internal connector. The memory unit will be used in Mössbauer experiments and in thermal neutron scattering experiments.

  15. A memory module for experimental data handling

    International Nuclear Information System (INIS)

    Blois, J. de

    1985-01-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need of direct memory access in computer controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment-, a list- and a fifo mode. This is achieved by connecting the main parts, being: the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory by an internal connector. The memory unit will be used in Moessbauer experiments and in thermal neutron scattering experiments. (orig.)
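    The three operating modes described in these two records can be sketched in software. A toy Python model (class and method names are invented; the real module is CAMAC hardware with 24-bit words):

```python
from collections import deque

class MemoryModule:
    """Toy model of the module's three modes; sizes and names are invented."""

    def __init__(self, size=1024):
        self.mem = [0] * size      # the MEM part (24-bit words in the real module)
        self.fifo = deque()        # FIFO buffering data between the two sides
        self.addr = 0              # ADDR counter used in list mode

    def increment(self, channel):
        self.mem[channel] += 1     # increment mode: histogramming, one bin per channel

    def store_list(self, word):
        self.mem[self.addr] = word  # list mode: record events sequentially
        self.addr += 1

    def push_fifo(self, word):
        self.fifo.append(word)      # fifo mode: application (producer) side

    def pop_fifo(self):
        return self.fifo.popleft()  # fifo mode: readout (CAMAC) side

mod = MemoryModule()
for ch in [3, 5, 3]:                # three "events" in increment mode
    mod.increment(ch)
print(mod.mem[3], mod.mem[5])       # -> 2 1
```

Increment mode is the classic multichannel-analyzer pattern used in Mössbauer and neutron-scattering counting experiments.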

  16. Consciousness and working memory: Current trends and research perspectives.

    Science.gov (United States)

    Velichkovsky, Boris B

    2017-10-01

    Working memory has long been thought to be closely related to consciousness. However, recent empirical studies show that unconscious content may be maintained within working memory and that complex cognitive computations may be performed on-line. This promotes research on the exact relationships between consciousness and working memory. Current evidence for working memory being a conscious as well as an unconscious process is reviewed. Consciousness is shown to be considered a subset of working memory by major current theories of working memory. Evidence for unconscious elements in working memory is shown to come from visual masking and attentional blink paradigms, and from the studies of implicit working memory. It is concluded that more research is needed to explicate the relationship between consciousness and working memory. Future research directions regarding the relationship between consciousness and working memory are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Event boundaries and memory improvement.

    Science.gov (United States)

    Pettijohn, Kyle A; Thompson, Alexis N; Tamplin, Andrea K; Krawietz, Sabine A; Radvansky, Gabriel A

    2016-03-01

    The structure of events can influence later memory for information that is embedded in them, with evidence indicating that event boundaries can both impair and enhance memory. The current study explored whether the presence of event boundaries during encoding can structure information to improve memory. In Experiment 1, memory for a list of words was tested in which event structure was manipulated by having participants walk through a doorway, or not, halfway through the word list. In Experiment 2, memory for lists of words was tested in which event structure was manipulated using computer windows. Finally, in Experiments 3 and 4, event structure was manipulated by having event shifts described in narrative texts. The consistent finding across all of these methods and materials was that memory was better when the information was distributed across two events rather than combined into a single event. Moreover, Experiment 4 demonstrated that increasing the number of event boundaries from one to two increased the memory benefit. These results are interpreted in the context of the Event Horizon Model of event cognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Sensory memory for ambiguous vision.

    Science.gov (United States)

    Pearson, Joel; Brascamp, Jan

    2008-09-01

    In recent years the overlap between visual perception and memory has shed light on our understanding of both. When ambiguous images that normally cause perception to waver unpredictably are presented briefly with intervening blank periods, perception tends to freeze, locking into one interpretation. This indicates that there is a form of memory storage across the blank interval. This memory trace codes low-level characteristics of the stored stimulus. Although a trace is evident after a single perceptual instance, the trace builds over many separate stimulus presentations, indicating a flexible, variable-length time-course. This memory shares important characteristics with priming by non-ambiguous stimuli. Computational models now provide a framework to interpret many empirical observations.

  19. Analysis of the Organization of Lexical Memory

    National Research Council Canada - National Science Library

    Miller, George

    1997-01-01

    The practical outcome of the project, Analysis of the Organization of Lexical Memory, is an electronic lexical database called WordNet that can be incorporated into computer systems for processing English text...

  20. The Distributed Memory Computing Conference (5th) Held in Charleston, South Carolina on April 8-12, 1990. Volume 1. Applications

    Science.gov (United States)

    1991-03-31

    From the proceedings table of contents: R. Battiti (p. 194); Session 8: Ray Tracing; "Hypercube Algorithm for Radiosity in a Ray Tracing Environment," Shirley A. Hermitage, Terrance L. Huntsberger, and Beverly A. Huntsberger, Department of Computer Science (p. 206). The surviving abstract fragment notes that ray tracing in most cases ignores diffuse reflection, whereas radiosity attempts to account for that phenomenon.

  1. Remote direct memory access

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.

    2012-12-11

    Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request-to-send message, the request-to-send message specifying the data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request-to-send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgment message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.
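    The request-to-send / acknowledge / put handshake described in the abstract can be sketched as follows. A toy Python model (all class and method names are invented for illustration; the patent describes DMA hardware engines, not software objects):

```python
class TargetDMA:
    """Toy target engine; names are invented, not the patent's."""

    def __init__(self):
        self.storage = None

    def on_request_to_send(self, length):
        self.storage = bytearray(length)   # assign base storage for the transfer
        return "ack"                       # signal: prepared to receive

    def on_put(self, offset, data):
        self.storage[offset:offset + len(data)] = data

class OriginDMA:
    def send(self, targets, data):
        acks = [t.on_request_to_send(len(data)) for t in targets]  # request to send
        assert all(a == "ack" for a in acks)                       # wait for all acks
        for t in targets:                                          # one "put" each
            t.on_put(0, data)

targets = [TargetDMA(), TargetDMA()]
OriginDMA().send(targets, b"payload")
print([bytes(t.storage) for t in targets])  # -> [b'payload', b'payload']
```

The point of the rendezvous is that the origin never transfers until every target has reserved storage, so each transfer completes with a single put.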

  2. Explorations in quantum computing

    CERN Document Server

    Williams, Colin P

    2011-01-01

    By the year 2020, the basic memory components of a computer will be the size of individual atoms. At such scales, the current theory of computation will become invalid. "Quantum computing" is reinventing the foundations of computer science and information theory in a way that is consistent with quantum physics - the most accurate model of reality currently known. Remarkably, this theory predicts that quantum computers can perform certain tasks breathtakingly faster than classical computers -- and, better yet, can accomplish mind-boggling feats such as teleporting information, breaking suppos

  3. Polymorphous computing fabric

    Science.gov (United States)

    Wolinski, Christophe Czeslaw [Los Alamos, NM; Gokhale, Maya B [Los Alamos, NM; McCabe, Kevin Peter [Los Alamos, NM

    2011-01-18

    Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per-application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable customized synthesis of fabric instances for a variety of applications with enhanced performance. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

  4. Sparse distributed memory

    Science.gov (United States)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
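    The storage and retrieval scheme Kanerva proposed can be sketched in a few lines: a fixed set of random hard addresses, a write that updates bit counters at every hard address within a Hamming radius of the cue, and a read that sums and thresholds those counters. A tiny Python sketch (the word length, location count, and radius are illustrative, far smaller than Kanerva's):

```python
import random

random.seed(1)
N, M, RADIUS = 64, 500, 28     # word length, hard locations, activation radius

hard = [[random.getrandbits(1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def write(addr, data):
    for h, c in zip(hard, counters):
        if hamming(h, addr) <= RADIUS:          # location is activated
            for i, bit in enumerate(data):
                c[i] += 1 if bit else -1

def read(addr):
    sums = [0] * N
    for h, c in zip(hard, counters):
        if hamming(h, addr) <= RADIUS:
            for i in range(N):
                sums[i] += c[i]
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.getrandbits(1) for _ in range(N)]
write(pattern, pattern)                          # autoassociative store
noisy = pattern[:]
for i in range(3):                               # corrupt the cue slightly
    noisy[i] ^= 1
recalled = read(noisy)
print(recalled == pattern)  # -> True: the stored pattern is recovered
```

Retrieval from a corrupted cue works because the sets of locations activated by the cue and by the original pattern overlap heavily, which is the "partial match" behavior the abstract describes.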

  5. The Research on Linux Memory Forensics

    Science.gov (United States)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on the operating system API, and instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, it analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data, and is compatible with multiple versions of the Linux kernel.
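    The core idea, interpreting raw memory bytes as kernel structures once the member offsets are known, can be illustrated on a synthetic memory image. A toy Python sketch (the record layout, offsets, and process names are fabricated; real analysis would use a physical memory dump and offsets recovered from ELF debug information):

```python
import struct

STRUCT_SIZE = 24   # fabricated record: 8-byte "next" pointer, then a 16-byte name

def make_image():
    """Build a fake flat memory image holding three linked process records."""
    img = bytearray(3 * STRUCT_SIZE)
    for i, (nxt, name) in enumerate([(24, b"init"), (48, b"sshd"), (0, b"bash")]):
        struct.pack_into("<Q16s", img, i * STRUCT_SIZE, nxt, name)
    return bytes(img)

def walk(img, head):
    """Follow the 'next' pointers, decoding the name field of each record."""
    addr, names = head, []
    while True:
        nxt, name = struct.unpack_from("<Q16s", img, addr)
        names.append(name.rstrip(b"\x00").decode())
        if nxt == 0:
            break
        addr = nxt
    return names

print(walk(make_image(), 0))  # -> ['init', 'sshd', 'bash']
```

The fragile part in practice is exactly what the paper addresses: the member offsets change between kernel versions, so they must be located rather than hard-coded.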

  6. A Josephson ternary associative memory cell

    International Nuclear Information System (INIS)

    Morisue, M.; Suzuki, K.

    1989-01-01

    This paper describes a three-valued content-addressable memory cell using a Josephson complementary ternary logic circuit named the JCTL. The memory cell proposed here can perform the three operations of searching, writing and reading in a ternary logic system. The principle of the memory circuit is illustrated in detail using the threshold characteristics of the JCTL. In order to investigate how a high-performance operation can be achieved, computer simulations have been made. Simulation results show that the cycle time of the memory operation is 120 ps, power consumption is about 0.5 μW/cell, and the tolerances of the writing and reading operations are ±15% and ±24%, respectively.

  7. Holographic memories

    DEFF Research Database (Denmark)

    Ramanujam, P.S.; Berg, R.H.; Hvilsted, Søren

    1999-01-01

    A two-dimensional holographic memory for archival storage is described. Assuming a coherent transfer function, an A4 page can be stored at high resolution in an area of 1 mm². Recently developed side-chain liquid crystalline azobenzene polyesters are found to be suitable media for holographic...

  8. Sharing Memories

    DEFF Research Database (Denmark)

    Rodil, Kasper; Nielsen, Emil Byskov; Nielsen, Jonathan Bernstorff

    2018-01-01

    in which it was to be contextualized and through a close partnership between aphasics and their caretakers. The underlying design methodology for the MemoryBook is Participatory Design manifested through the collaboration and creations by two aphasic residents and one member of the support staff. The idea...

  9. Memory consolidation

    NARCIS (Netherlands)

    Takashima, A.; Bakker, I.; Schmid, H.-J.

    2016-01-01

    In order to make use of novel experiences and knowledge to guide our future behavior, we must keep large amounts of information accessible for retrieval. The memory system that stores this information needs to be flexible in order to rapidly incorporate incoming information, but also requires that

  10. Skilled Memory.

    Science.gov (United States)

    1980-11-06

    Woodworth, R. S. Experimental Psychology. New York: Henry Holt and Co., 1938. Yates, F. A. The Art of Memory. London: Routledge and Kegan Paul, 1966.

  11. Conditional load and store in a shared memory

    Science.gov (United States)

    Blumrich, Matthias A; Ohmacht, Martin

    2015-02-03

    A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
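    The reservation-register mechanism can be sketched as follows. A toy Python model (names are illustrative; the patented mechanism lives in cache hardware, and the example interleaving is invented):

```python
class SharedCache:
    """Toy model of the reservation-register scheme; names are invented."""

    def __init__(self):
        self.mem = {}
        self.reservations = {}                 # processor id -> reserved address

    def load_reserve(self, cpu, addr):
        self.reservations[cpu] = addr          # record a reservation for this cpu
        return self.mem.get(addr, 0)

    def store_conditional(self, cpu, addr, value):
        if self.reservations.get(cpu) != addr:
            return False                       # reservation lost: the store fails
        self.mem[addr] = value
        for c, a in list(self.reservations.items()):
            if a == addr:                      # a successful store clears every
                del self.reservations[c]       # reservation on that address
        return True

cache = SharedCache()
v = cache.load_reserve(cpu=0, addr=0x100)
cache.load_reserve(cpu=1, addr=0x100)
print(cache.store_conditional(cpu=1, addr=0x100, value=v + 1))  # -> True
print(cache.store_conditional(cpu=0, addr=0x100, value=v + 1))  # -> False
```

The failed second store is the point of the scheme: cpu 0's reservation was invalidated by cpu 1's successful store, so cpu 0 must retry its load-reserve/store-conditional sequence, which is how lock-free read-modify-write is built on such primitives.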

  12. Operational Semantics of a Weak Memory Model inspired by Go

    OpenAIRE

    Fava, Daniel Schnetzer; Stolz, Volker; Valle, Stian

    2017-01-01

    A memory model dictates which values may be returned when reading from memory. In a parallel computing setting, the memory model affects how processes communicate through shared memory. The design of a proper memory model is a balancing act. On one hand, memory models must be lax enough to allow common hardware and compiler optimizations. On the other, the more lax the model, the harder it is for developers to reason about their programs. In order to alleviate the burden on programmers, a wea...

  13. A Time-predictable Memory Network-on-Chip

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Chong, David VH; Puffitsch, Wolfgang

    2014-01-01

    To derive safe bounds on worst-case execution times (WCETs), all components of a computer system need to be time-predictable: the processor pipeline, the caches, the memory controller, and memory arbitration on a multicore processor. This paper presents a solution for time-predictable memory...... arbitration and access for chip-multiprocessors. The memory network-on-chip is organized as a tree with time-division multiplexing (TDM) of accesses to the shared memory. The TDM based arbitration completely decouples processor cores and allows WCET analysis of the memory accesses on individual cores without...
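    One reason TDM arbitration is time-predictable: the worst-case memory latency of a core depends only on the TDM schedule, not on what the other cores do. A minimal sketch of such a bound (the slot and core counts are illustrative, not the paper's configuration):

```python
# Worst case under TDM: a request that just missed its own slot waits for
# the remainder of the round and then uses its own full slot.
def worst_case_latency(n_cores, slot_cycles):
    round_len = n_cores * slot_cycles
    return (round_len - 1) + slot_cycles     # cycles, regardless of other cores

print(worst_case_latency(4, 3))  # -> 14
```

Because the bound is a pure function of the schedule, WCET analysis of each core can proceed in isolation, which is the decoupling the abstract describes.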

  14. A short review of memory research

    Directory of Open Access Journals (Sweden)

    Igor Areh

    2004-09-01

    Full Text Available Scientific research on memory began at the end of the 19th century with studies of semantic and/or long-term memory. In most cases memory was interpreted as a storehouse for various data, and the quality of the storehouse was usually defined by the quantity of recalled data. Research work concentrated on the specific connection between memory and learning. At that time a few authors developed theories that were rare, uncommon and ahead of their time (e.g. Bartlett, Ribot, Freud). Even in the 20th century, when the behavioural stimulus-response approach began to dominate, the measure of memory quality was still the quantity of memory recall. In the 1960s the rise of cognitive psychology began, the computer metaphor was born, and the behavioural comprehension of the cognitive system was finally surpassed. The cognitive system was understood as a computer-like interface between an organism and its environment. In recent years the computer metaphor is no longer dominant, and new and more efficient concepts are moving forward. The quantity of recalled data, as the measure of memory quality, is no longer so important – attention is focused on the accuracy of memory recall.

  15. Customizable computing

    CERN Document Server

    Chen, Yu-Ting; Gill, Michael; Reinman, Glenn; Xiao, Bingjun

    2015-01-01

    Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of the research community and industry. The large energy efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, where one can adapt the architecture to the workload. In this Synthesis lecture, we present an overview and introduction of the recent developments on energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory

  16. Distributed-Memory Fast Maximal Independent Set

    Energy Technology Data Exchange (ETDEWEB)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-09-13

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All of those algorithms were designed for shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
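    For reference, the shared-memory version of Luby's algorithm is short: in each round every remaining vertex draws a random value, local minima join the independent set, and winners are removed together with their neighbors. A Python sketch (the example graph is invented; this is the sequential idea, not the paper's distributed-memory extension):

```python
import random

def luby_mis(adj):
    """One MIS via Luby-style randomized rounds; adj maps vertex -> neighbor set."""
    live = set(adj)
    mis = set()
    while live:
        r = {v: random.random() for v in live}       # fresh random values each round
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners                               # local minima join the MIS
        for v in winners:
            live.discard(v)
            live -= adj[v]                           # drop winners and their neighbors
        # each round removes at least the globally smallest live vertex,
        # so the loop terminates
    return mis

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}   # a triangle plus a tail
mis = luby_mis(adj)
print(mis)  # one valid MIS, e.g. {2} or {0, 3}
```

Independence holds because two adjacent vertices cannot both be local minima in the same round; maximality holds because every removed vertex is either in the set or adjacent to a member.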

  17. Self-Testing Static Random-Access Memory

    Science.gov (United States)

    Chau, Savio; Rennels, David

    1991-01-01

    Proposed static random-access memory for computer features improved error-detecting and -correcting capabilities. New self-testing scheme provides for detection and correction of errors at any time during normal operation - even while data being written into memory. Faults in equipment causing errors in output data detected by repeatedly testing every memory cell to determine whether it can still store both "one" and "zero", without destroying data stored in memory.

  18. Performing stencil computations

    Energy Technology Data Exchange (ETDEWEB)

    Donofrio, David

    2018-01-16

    A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
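    A stencil computation itself is simple; the patent's contribution is in how the neighboring values are retrieved. For context, a minimal 3-point stencil in Python (the weights and data are illustrative):

```python
def stencil_1d(src, w=(0.25, 0.5, 0.25)):
    """3-point stencil: each interior point from a fixed pattern of neighbors."""
    out = src[:]                              # boundary values pass through
    for i in range(1, len(src) - 1):
        out[i] = w[0]*src[i-1] + w[1]*src[i] + w[2]*src[i+1]
    return out

data = [0.0, 0.0, 4.0, 0.0, 0.0]
print(stencil_1d(data))  # -> [0.0, 1.0, 2.0, 1.0, 0.0]
```

The fixed offsets (i-1, i, i+1) are exactly the kind of access pattern the described offset-to-register indirection is meant to serve with a single instruction per load.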

  19. Review on Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    P. Sumithra

    2017-03-01

    Full Text Available Computational electromagnetics (CEM) is applied to model the interaction of electromagnetic fields with objects such as antennas, waveguides and aircraft and their environment using Maxwell's equations. In this paper the strengths and weaknesses of various computational electromagnetic techniques are discussed. The performance of these techniques in terms of accuracy, memory and computational time for application-specific tasks, such as modeling RCS (radar cross section), space applications, thin wires and antenna arrays, is presented.

  20. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
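    One common way to quantify metacognition from confidence ratings and objective correctness, consistent with the description above (though not necessarily the authors' exact measure), is a type-2 AUROC: the probability that a randomly chosen correct trial received higher confidence than a randomly chosen incorrect one. A small Python sketch with fabricated data:

```python
def type2_auroc(confidence, correct):
    """P(confidence on a correct trial > confidence on an incorrect one),
    ties counted as 0.5; 0.5 = no metacognitive insight, 1.0 = perfect."""
    hits = [c for c, ok in zip(confidence, correct) if ok]
    misses = [c for c, ok in zip(confidence, correct) if not ok]
    wins = sum((h > m) + 0.5 * (h == m) for h in hits for m in misses)
    return wins / (len(hits) * len(misses))

conf = [4, 3, 4, 1, 2, 1]       # confidence rating per trial (fabricated)
correct = [1, 1, 1, 0, 0, 1]    # objective change-detection outcome (fabricated)
print(type2_auroc(conf, correct))  # -> 0.8125
```

Computing such a measure separately for sensory-memory and working-memory conditions is the kind of comparison the abstract reports.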

  1. Noise reduction in optically controlled quantum memory

    Science.gov (United States)

    Ma, Lijun; Slattery, Oliver; Tang, Xiao

    2018-05-01

    Quantum memory is an essential tool for quantum communications systems and quantum computers. An important category of quantum memory, called optically controlled quantum memory, uses a strong classical beam to control the storage and re-emission of a single-photon signal through an atomic ensemble. In this type of memory, the residual light from the strong classical control beam can cause severe noise and degrade the system performance significantly. Efficiently suppressing this noise is a requirement for the successful implementation of optically controlled quantum memories. In this paper, we briefly introduce the latest and most common approaches to quantum memory and review the various noise-reduction techniques used in implementing them.

  2. Forming of shape memory composite structures

    DEFF Research Database (Denmark)

    Santo, Loredana; Quadrini, Fabrizio; De Chiffre, Leonardo

    2013-01-01

A new forming procedure was developed to produce shape memory composite structures having structural composite skins over a shape memory polymer core. The core material was obtained by solid-state foaming of an epoxy polyester resin with remarkable shape memory properties. The composite skin consisted of a two-layer unidirectional thermoplastic composite (glass-filled polypropylene). Skins were joined to the foamed core by hot compression without any adhesive; experimental tests confirmed that very good adhesion was obtained. The structure of the foam core was investigated by means of computer axial tomography. Final shape memory composite panels were mechanically tested by three-point bending before and after a shape memory step. This step consisted of a compression to reduce the panel thickness by up to 60%. At the end of the bending test the panel shape was recovered by heating and a new memory step...

  3. A view of Kanerva's sparse distributed memory

    Science.gov (United States)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between the capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and the capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.
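The core SDM mechanism summarized above can be sketched in a few lines: data is written to, and read from, every "hard location" whose fixed random address lies within a Hamming radius of the cue. The following toy pure-Python version is only an illustration; the dimensions, radius, and helper names are invented and are far smaller than Kanerva's proposals.

```python
import random

random.seed(7)
N, M, RADIUS = 64, 500, 26  # toy sizes: word length, hard locations, activation radius

# Hard locations: fixed random bit-vector addresses, each with one counter per bit.
addresses = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def active(cue):
    """Indices of hard locations within RADIUS of the cue address."""
    return [i for i in range(M) if hamming(addresses[i], cue) <= RADIUS]

def write(word):
    # Store autoassociatively: increment/decrement counters at every active location.
    for i in active(word):
        for j in range(N):
            counters[i][j] += 1 if word[j] else -1

def read(cue):
    # Retrieve by summing counters over active locations and thresholding each bit.
    sums = [0] * N
    for i in active(cue):
        for j in range(N):
            sums[j] += counters[i][j]
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(N)]
write(pattern)

noisy = pattern[:]                       # corrupt 3 bits to form an imperfect cue
for j in random.sample(range(N), 3):
    noisy[j] ^= 1

recovered = read(noisy)
```

Because the activation sets of the noisy cue and the stored word overlap heavily, the thresholded counter sums reproduce the original word exactly, which is the noise-tolerant recall SDM is built for.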

  4. Spatial memory and animal movement.

    Science.gov (United States)

    Fagan, William F; Lewis, Mark A; Auger-Méthé, Marie; Avgar, Tal; Benhamou, Simon; Breed, Greg; LaDage, Lara; Schlägel, Ulrike E; Tang, Wen-wu; Papastamatiou, Yannis P; Forester, James; Mueller, Thomas

    2013-10-01

    Memory is critical to understanding animal movement but has proven challenging to study. Advances in animal tracking technology, theoretical movement models and cognitive sciences have facilitated research in each of these fields, but also created a need for synthetic examination of the linkages between memory and animal movement. Here, we draw together research from several disciplines to understand the relationship between animal memory and movement processes. First, we frame the problem in terms of the characteristics, costs and benefits of memory as outlined in psychology and neuroscience. Next, we provide an overview of the theories and conceptual frameworks that have emerged from behavioural ecology and animal cognition. Third, we turn to movement ecology and summarise recent, rapid developments in the types and quantities of available movement data, and in the statistical measures applicable to such data. Fourth, we discuss the advantages and interrelationships of diverse modelling approaches that have been used to explore the memory-movement interface. Finally, we outline key research challenges for the memory and movement communities, focusing on data needs and mathematical and computational challenges. We conclude with a roadmap for future work in this area, outlining axes along which focused research should yield rapid progress. © 2013 John Wiley & Sons Ltd/CNRS.

  5. A Comparison of Two Paradigms for Distributed Shared Memory

    NARCIS (Netherlands)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.; Tanenbaum, A.S.

    1992-01-01

    Two paradigms for distributed shared memory on loosely‐coupled computing systems are compared: the shared data‐object model as used in Orca, a programming language specially designed for loosely‐coupled computing systems, and the shared virtual memory model. For both paradigms two systems are

  6. The impact of taxing working memory on negative and positive memories

    NARCIS (Netherlands)

    Engelhard, I.M.; van Uijen, S.L.; Van den Hout, M.A.

    2010-01-01

    BACKGROUND: Earlier studies have shown that horizontal eye movement (EM) during retrieval of a negative memory reduces its vividness and emotionality. This may be due to both tasks competing for working memory (WM) resources. This study examined whether playing the computer game "Tetris" also blurs

  7. Test-Retest Reliability of Computerized, Everyday Memory Measures and Traditional Memory Tests.

    Science.gov (United States)

    Youngjohn, James R.; And Others

    Test-retest reliabilities and practice effect magnitudes were considered for nine computer-simulated tasks of everyday cognition and five traditional neuropsychological tests. The nine simulated everyday memory tests were from the Memory Assessment Clinic battery as follows: (1) simple reaction time while driving; (2) divided attention (driving…

  8. Concrete Memories

    DEFF Research Database (Denmark)

    Wiegand, Frauke Katharina

    2015-01-01

    This article traces the presence of Atlantikwall bunkers in amateur holiday snapshots and discusses the ambiguous role of the bunker site in visual cultural memory. Departing from my family’s private photo collection from twenty years of vacationing on the Danish west coast, the different mundane and poetic appropriations and inscriptions of the bunker site are depicted. Ranging between overlooked side presences and an overwhelming visibility, the concrete remains of fascist war architecture are involved in and motivate different sensuous experiences and mnemonic appropriations. The article meets the bunkers’ changing visuality and the cultural topography they both actively transform and are transformed by, juxtaposing different acts and objects of memory over time and in different visual articulations.

  9. Treadwell Memorial

    OpenAIRE

    Downey, Frances K

    2015-01-01

    This is a memorial to gold mining in Southeast Alaska. The structure takes visitors from the Treadwell trail onto the edge of a popular local beach, reclaiming a forgotten place that was once the largest gold mine in the world. A tangible tribute to this obscure period of history, this building kindles a connection between artifacts and the community. It is a liminal space, connecting ocean and mountain, past and present, civilization and wilderness. An investigation of the Treadwell Gold...

  10. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2015-06-01

    examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully... memory images and malware, this new series of reports will be directed at those who must analyse Linux malware-infected memory images. The skills...

  11. Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute

    OpenAIRE

    Dang Hung; Dinh Tien Tuan Anh; Chang Ee-Chien; Ooi Beng Chin

    2017-01-01

    We consider privacy-preserving computation of big data using trusted computing primitives with limited private memory. Simply ensuring that the data remains encrypted outside the trusted computing environment is insufficient to preserve data privacy, for data movement observed during computation could leak information. While it is possible to thwart such leakage using a generic solution such as ORAM [42], designing efficient privacy-preserving algorithms is challenging. Besides computation effi...

  12. A Computer in Your Lap.

    Science.gov (United States)

    Byers, Joseph W.

    1991-01-01

    The most useful feature of laptop computers is portability, as one elementary school principal notes. IBM and Apple are not leaders in laptop technology. Tandy and Toshiba market relatively inexpensive models offering durability, reliable software, and sufficient memory space. (MLH)

  13. Visual memory and visual perception: when memory improves visual search.

    Science.gov (United States)

    Riou, Benoit; Lesourd, Mathieu; Brunel, Lionel; Versace, Rémy

    2011-08-01

    This study examined the relationship between memory and perception in order to identify the influence of a memory dimension in perceptual processing. Our aim was to determine whether the variation of typical size between items (i.e., the size in real life) affects visual search. In two experiments, the congruency between typical size difference and perceptual size difference was manipulated in a visual search task. We observed that congruency between the typical and perceptual size differences decreased reaction times in the visual search (Exp. 1), and noncongruency between these two differences increased reaction times in the visual search (Exp. 2). We argue that these results highlight that memory and perception share some resources and reveal the intervention of typical size difference on the computation of the perceptual size difference.

  14. Studies of Human Memory and Language Processing.

    Science.gov (United States)

    Collins, Allan M.

    The purposes of this study were to determine the nature of human semantic memory and to obtain knowledge usable in the future development of computer systems that can converse with people. The work was based on a computer model which is designed to comprehend English text, relating the text to information stored in a semantic data base that is…

  15. Quantum memory for superconducting qubits

    International Nuclear Information System (INIS)

    Pritchett, Emily J.; Geller, Michael R.

    2005-01-01

    Many protocols for quantum computation require a memory element to store qubits. We discuss the speed and accuracy with which quantum states prepared in a superconducting qubit can be stored in and later retrieved from an attached high-Q resonator. The memory fidelity depends on both the qubit-resonator coupling strength and the location of the state on the Bloch sphere. Our results show that a quantum memory demonstration should be possible with existing superconducting qubit designs, which would be an important milestone in solid-state quantum information processing. Although we specifically focus on a large-area, current-biased Josephson-junction phase qubit coupled to the dilatational mode of a piezoelectric nanoelectromechanical disk resonator, many of our results will apply to other qubit-oscillator models

  16. Intentionally fabricated autobiographical memories

    OpenAIRE

    Justice, LV; Morrison, CM; Conway, MA

    2017-01-01

    Participants generated both autobiographical memories (AMs) that they believed to be true and intentionally fabricated autobiographical memories (IFAMs). Memories were constructed while a concurrent memory load (random 8-digit sequence) was held in mind or while there was no concurrent load. Amount and accuracy of recall of the concurrent memory load was reliably poorer following generation of IFAMs than following generation of AMs. There was no reliable effect of load on memory generation ti...

  17. STRUKTUR DAN PROSES MEMORI

    Directory of Open Access Journals (Sweden)

    Magda Bhinnety

    2015-09-01

    This paper describes the structures and processes of the human memory system according to the modal model. Sensory memory is described as the first system to store information from the outside world. Short‐term memory, now called working memory, represents a system characterized by a limited ability to store and retrieve information. Long‐term memory, on the other hand, stores larger amounts of information, and for longer, than short‐term memory.

  18. STRUKTUR DAN PROSES MEMORI

    OpenAIRE

    Bhinnety, Magda

    2015-01-01

    This paper describes the structures and processes of the human memory system according to the modal model. Sensory memory is described as the first system to store information from the outside world. Short‐term memory, now called working memory, represents a system characterized by a limited ability to store and retrieve information. Long‐term memory, on the other hand, stores larger amounts of information, and for longer, than short‐term memory.

  19. Polymorphous Computing Architecture (PCA) Kernel-Level Benchmarks

    National Research Council Canada - National Science Library

    Lebak, J

    2004-01-01

    .... "Computation" aspects include floating-point and integer performance, as well as the memory hierarchy, while the "communication" aspects include the network, the memory hierarchy, and the I/O capabilities...

  20. System and method for programmable bank selection for banked memory subsystems

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton on Hudson, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hoenicke, Dirk (Seebruck-Seeon, DE); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY)

    2010-09-07

    A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each of the respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment memory storage access distributed across the one or more memory storage structures.
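The programmable selection described in this record — forming a bank-select signal from pre-determined physical-address bit positions — can be sketched as follows. The function and its bit layout are invented for illustration, not taken from the patented design.

```python
def make_bank_selector(bit_positions):
    """Build a selector that forms a bank number from chosen physical-address bits.

    bit_positions: which address bits (LSB = 0) feed the select signal,
    listed from least to most significant select bit. This models the
    "programmable" part: reconfiguring the list changes the interleaving.
    """
    def select(physical_address):
        bank = 0
        for out_bit, addr_bit in enumerate(bit_positions):
            bank |= ((physical_address >> addr_bit) & 1) << out_bit
        return bank
    return select

# Example: interleave across 4 banks on 128-byte boundaries,
# i.e. the select signal is taken from address bits 7 and 8.
select = make_bank_selector([7, 8])
```

With this configuration, consecutive 128-byte regions cycle through banks 0, 1, 2, 3 and back to 0, spreading sequential accesses across the storage structures.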

  1. Electroconvulsive therapy and memory.

    Science.gov (United States)

    Harper, R G; Wiens, A N

    1975-10-01

    Recent research on the effects of electroconvulsive therapy (ECT) on memory is critically reviewed. Despite some inconsistent findings, unilateral nondominant ECT appears to affect verbal memory less than bilateral ECT. Adequate research on multiple monitored ECT is lacking. With few exceptions, the research methodologies for assessing memory have been inadequate. Many studies have confounded learning with retention, and only very recently has long term memory been adequately studied. Standardized assessment procedures for short term and long term memory are needed, in addition to more sophisticated assessment of memory processes, the duration of memory loss, and qualitative aspects of memories.

  2. Enhancing memory performance after organic brain disease relies on retrieval processes rather than encoding or consolidation

    NARCIS (Netherlands)

    Hildebrandt, H.; Gehrmann, A.; Mödden, C.; Eling, P.A.T.M.

    2011-01-01

    Neuropsychological rehabilitation of memory performance is still a controversial topic, and rehabilitation studies have not analyzed to which stage of memory processing (encoding, consolidation, or retrieval) enhancement may be attributed. We first examined the efficacy of a computer training

  3. Electrostatically telescoping nanotube nonvolatile memory device

    International Nuclear Information System (INIS)

    Kang, Jeong Won; Jiang Qing

    2007-01-01

    We propose a nonvolatile memory based on carbon nanotubes (CNTs), serving as a key building block for molecular-scale computers, and investigate the dynamic operations of a double-walled CNT memory element by classical molecular dynamics simulations. The localized potential energy wells arising from both the interwall van der Waals energy and the CNT-metal binding energy provide the bistability of the CNT positions, and the electrostatic attractive forces induced by the voltage differences provide the reversibility of this CNT memory. The material for the electrodes should be carefully chosen to achieve the nonvolatility of this memory. The kinetic energy of the CNT shuttle experiences several rebounds induced by the collisions of the CNT with the metal electrodes, and this is critically important to the performance of such an electrostatically telescoping CNT memory because the collision time is sufficiently long to cause a delay of the state transition

  4. Data fusion using dynamic associative memory

    Science.gov (United States)

    Lo, Titus K. Y.; Leung, Henry; Chan, Keith C. C.

    1997-07-01

    An associative memory, unlike an addressed memory used in conventional computers, is content addressable. That is, storing and retrieving information are not based on the location of the memory cell but on the content of the information. There are a number of approaches to implement an associative memory, one of which is to use a neural dynamical system where objects being memorized or recognized correspond to its basic attractors. The work presented in this paper is the investigation of applying a particular type of neural dynamical associative memory, namely the projection network, to pattern recognition and data fusion. Three types of attractors, which are fixed-point, limit- cycle, and chaotic, have been studied, evaluated and compared.
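The idea of retrieval by content via fixed-point attractors can be illustrated with a classical Hopfield network trained by the Hebbian outer-product rule; this is a generic stand-in for attractor-based associative memory, not the projection network studied in the paper.

```python
# Minimal fixed-point attractor memory (classical Hopfield network).
# Patterns are +/-1 vectors; a corrupted probe relaxes to the stored attractor.

def train(patterns):
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:                 # Hebbian outer-product rule, zero diagonal
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, probe, steps=10):
    s = probe[:]
    for _ in range(steps):             # synchronous updates until a fixed point
        nxt = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
               for i in range(len(s))]
        if nxt == s:                   # reached an attractor
            break
        s = nxt
    return s

p1 = [+1] * 10 + [-1] * 10            # two orthogonal stored patterns
p2 = [+1, -1] * 10
w = train([p1, p2])

probe = p1[:]                          # corrupt two bits of p1
probe[0], probe[15] = -p1[0], -p1[15]
result = recall(w, probe)
```

Retrieval here is addressed purely by content: the corrupted probe falls into the basin of attraction of the stored pattern it most resembles and the network settles on that fixed point.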

  5. Projected phase-change memory devices.

    Science.gov (United States)

    Koelmans, Wabe W; Sebastian, Abu; Jonnalagadda, Vara Prasad; Krebs, Daniel; Dellmann, Laurent; Eleftheriou, Evangelos

    2015-09-03

    Nanoscale memory devices, whose resistance depends on the history of the electric signals applied, could become critical building blocks in new computing paradigms, such as brain-inspired computing and memcomputing. However, there are key challenges to overcome, such as the high programming power required, noise and resistance drift. Here, to address these, we present the concept of a projected memory device, whose distinguishing feature is that the physical mechanism of resistance storage is decoupled from the information-retrieval process. We designed and fabricated projected memory devices based on the phase-change storage mechanism and convincingly demonstrate the concept through detailed experimentation, supported by extensive modelling and finite-element simulations. The projected memory devices exhibit remarkably low drift and excellent noise performance. We also demonstrate active control and customization of the programming characteristics of the device that reliably realize a multitude of resistance states.

  6. FPGA Based Intelligent Co-operative Processor in Memory Architecture

    DEFF Research Database (Denmark)

    Ahmed, Zaki; Sotudeh, Reza; Hussain, Dil Muhammad Akbar

    2011-01-01

    In a continuing effort to improve computer system performance, Processor-In-Memory (PIM) architecture has emerged as an alternative solution. PIM architecture incorporates computational units and control logic directly on the memory to provide immediate access to the data. To exploit the potential benefits of PIM, a concept of Co-operative Intelligent Memory (CIM) was developed by the intelligent system group of the University of Hertfordshire, based on the previously developed Co-operative Pseudo Intelligent Memory (CPIM). This paper provides an overview of previous works (CPIM, CIM) and realization...

  7. Aspects of GPU performance in algorithms with random memory access

    Science.gov (United States)

    Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.

    2017-10-01

    The numerical code for solving the Boltzmann equation on a hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that on Tesla K40 accelerators computational performance drops dramatically as the percentage of occupied GPU memory increases. Testing revealed that memory access time increases tens of times after a certain critical percentage of memory is occupied. Moreover, this appears to be a common problem of all NVidia GPUs, arising from their architecture. A few modifications of the numerical algorithm were suggested to overcome this problem. One of them, based on splitting the memory into "virtual" blocks, resulted in a 2.5 times speed-up.

  8. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    Science.gov (United States)

    Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2017-10-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

  9. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    International Nuclear Information System (INIS)

    Garner, Andrew J P; Thompson, Jayne; Vedral, Vlatko; Gu, Mile; Liu, Qing

    2017-01-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart. (paper)

  10. C-RAM: breaking mobile device memory barriers using the cloud

    OpenAIRE

    Pamboris, A; Pietzuch, P

    2015-01-01

    Mobile applications are constrained by the available memory of mobile devices. We present C-RAM, a system that uses cloud-based memory to extend the memory of mobile devices. It splits application state and its associated computation between a mobile device and a cloud node to allow applications to consume more memory, while minimising the performance impact. C-RAM thus enables developers to realise new applications or port legacy desktop applications with a large memory footprint to mobile ...

  11. Detailed sensory memory, sloppy working memory

    NARCIS (Netherlands)

    Sligte, I.G.; Vandenbroucke, A.R.E.; Scholte, H.S.; Lamme, V.A.F.

    2010-01-01

    Visual short-term memory (VSTM) enables us to actively maintain information in mind for a brief period of time after stimulus disappearance. According to recent studies, VSTM consists of three stages - iconic memory, fragile VSTM, and visual working memory - with increasingly stricter capacity

  12. Episodic memory, semantic memory, and amnesia.

    Science.gov (United States)

    Squire, L R; Zola, S M

    1998-01-01

    Episodic memory and semantic memory are two types of declarative memory. There have been two principal views about how this distinction might be reflected in the organization of memory functions in the brain. One view, that episodic memory and semantic memory are both dependent on the integrity of medial temporal lobe and midline diencephalic structures, predicts that amnesic patients with medial temporal lobe/diencephalic damage should be proportionately impaired in both episodic and semantic memory. An alternative view is that the capacity for semantic memory is spared, or partially spared, in amnesia relative to episodic memory ability. This article reviews two kinds of relevant data: 1) case studies where amnesia has occurred early in childhood, before much of an individual's semantic knowledge has been acquired, and 2) experimental studies with amnesic patients of fact and event learning, remembering and knowing, and remote memory. The data provide no compelling support for the view that episodic and semantic memory are affected differently in medial temporal lobe/diencephalic amnesia. However, episodic and semantic memory may be dissociable in those amnesic patients who additionally have severe frontal lobe damage.

  13. Exploiting Data Similarity to Reduce Memory Footprints

    Science.gov (United States)

    2011-01-01

    ...Figure 1 illustrates. We expect the budget for an exascale system to be approximately $200M, and memory costs will account for about half of that budget [21]... Figure 2 shows that monetary considerations will lead to significantly less main memory relative to compute capability in exascale systems even if...

  14. Trinary Associative Memory Would Recognize Machine Parts

    Science.gov (United States)

    Liu, Hua-Kuang; Awwal, Abdul Ahad S.; Karim, Mohammad A.

    1991-01-01

    Trinary associative memory combines the merits, and overcomes the major deficiencies, of unipolar and bipolar logics by combining them in a three-valued logic that reverts to unipolar or bipolar binary selectively, as needed to perform specific tasks. The advantage of an associative memory is that one obtains access to all parts of it simultaneously on the basis of the content, rather than the address, of the data. Consequently, it can be used to fully exploit the parallelism and speed of optical computing.

  15. Optical memory

    Science.gov (United States)

    Mao, Samuel S; Zhang, Yanfeng

    2013-07-02

    Optical memory comprising: a semiconductor wire, a first electrode, a second electrode, a light source, a means for producing a first voltage at the first electrode, a means for producing a second voltage at the second electrode, and a means for determining the presence of an electrical voltage across the first electrode and the second electrode exceeding a predefined voltage. The first voltage, preferably less than 0 volts, is different from said second voltage. The semiconductor wire is optically transparent and has a bandgap less than the energy produced by the light source. The light source is optically connected to the semiconductor wire. The first electrode and the second electrode are electrically insulated from each other and from said semiconductor wire.

  16. Make Computer Learning Stick.

    Science.gov (United States)

    Casella, Vicki

    1985-01-01

    Teachers are using computer programs in conjunction with many classroom staples such as art supplies, math manipulatives, and science reference books. Twelve software programs and related activities are described which teach visual and auditory memory and spatial relations, as well as subject areas such as anatomy and geography. (MT)

  17. SODR Memory Control Buffer Control ASIC

    Science.gov (United States)

    Hodson, Robert F.

    1994-01-01

    The Spacecraft Optical Disk Recorder (SODR) is a state-of-the-art mass storage system for future NASA missions requiring high transmission rates and a large-capacity storage system. This report covers the design and development of an SODR memory buffer control application-specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times, and (2) converting data formats from a high performance parallel interface format to a small computer systems interface format. Ten 144-pin, 50 MHz CMOS ASICs were designed, fabricated, and tested to implement the memory buffer control function.
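The first function named in this record — buffering so that data is not lost while the disk is busy — is essentially rate matching through a bounded FIFO. A minimal sketch (the class, capacity, and behaviour on a full buffer are invented for illustration):

```python
from collections import deque

class BoundedBuffer:
    """Toy FIFO standing in for the rate-matching buffer role described above:
    the fast producer side fills it while the slower consumer side drains it."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()

    def put(self, item):
        if len(self.q) >= self.capacity:
            return False        # buffer full: producer must stall (no data loss)
        self.q.append(item)
        return True

    def get(self):
        return self.q.popleft() if self.q else None   # None when empty

buf = BoundedBuffer(capacity=4)
accepted = [buf.put(i) for i in range(6)]   # fast side offers 6 items
drained = [buf.get() for _ in range(5)]     # slow side drains them in order
```

Rejecting writes when full (rather than overwriting) mirrors the design goal stated in the abstract: during disk access stalls, incoming data is held rather than dropped.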

  18. Quantum computers: Definition and implementations

    International Nuclear Information System (INIS)

    Perez-Delgado, Carlos A.; Kok, Pieter

    2011-01-01

    The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. Therefore, the question is what are the general criteria for implementing quantum computers. To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: Any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.

  19. Memory, microprocessor, and ASIC

    CERN Document Server

    Chen, Wai-Kai

    2003-01-01

    System Timing. ROM/PROM/EPROM. SRAM. Embedded Memory. Flash Memories. Dynamic Random Access Memory. Low-Power Memory Circuits. Timing and Signal Integrity Analysis. Microprocessor Design Verification. Microprocessor Layout Method. Architecture. ASIC Design. Logic Synthesis for Field Programmable Gate Array (FPGA) Technology. Testability Concepts and DFT. ATPG and BIST. CAD Tools for BIST/DFT and Delay Faults.

  20. Infant Visual Recognition Memory

    Science.gov (United States)

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2004-01-01

    Visual recognition memory is a robust form of memory that is evident from early infancy, shows pronounced developmental change, and is influenced by many of the same factors that affect adult memory; it is surprisingly resistant to decay and interference. Infant visual recognition memory shows (a) modest reliability, (b) good discriminant…

  1. Nanoscale memory devices

    International Nuclear Information System (INIS)

    Chung, Andy; Deen, Jamal; Lee, Jeong-Soo; Meyyappan, M

    2010-01-01

    This article reviews the current status and future prospects for the use of nanomaterials and devices in memory technology. First, the status and continuing scaling trends of flash memory are discussed. Then, a detailed discussion of technologies trying to replace flash in the near term is provided. This includes phase-change random access memory, ferroelectric random access memory and magnetic random access memory. The long-term nanotechnology prospects for memory devices include carbon-nanotube-based memory, molecular electronics and memristors based on resistive materials such as TiO2. (topical review)

  2. Advanced topics in security computer system design

    International Nuclear Information System (INIS)

    Stachniak, D.E.; Lamb, W.R.

    1989-01-01

    The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features

  3. Visual memory needs categories

    OpenAIRE

    Olsson, Henrik; Poom, Leo

    2005-01-01

    Capacity limitations in the way humans store and process information in working memory have been extensively studied, and several memory systems have been distinguished. In line with previous capacity estimates for verbal memory and memory for spatial information, recent studies suggest that it is possible to retain up to four objects in visual working memory. The objects used have typically been categorically different colors and shapes. Because knowledge about categories is stored in long-t...

  4. Non-volatile memories

    CERN Document Server

    Lacaze, Pierre-Camille

    2014-01-01

    Written for scientists, researchers, and engineers, Non-volatile Memories describes recent research and implementations related to the design of a new generation of non-volatile electronic memories. The objective is to replace existing memories (DRAM, SRAM, EEPROM, Flash, etc.) with a universal memory model likely to reach better performance than the current types of memory: extremely high switching speeds, high integration densities and information retention times of about ten years.

  5. Memory: sins and virtues

    OpenAIRE

    Schacter, Daniel L.

    2013-01-01

    Memory plays an important role in everyday life but does not provide an exact and unchanging record of experience: research has documented that memory is a constructive process that is subject to a variety of errors and distortions. Yet these memory “sins” also reflect the operation of adaptive aspects of memory. Memory can thus be characterized as an adaptive constructive process, which plays a functional role in cognition but produces distortions, errors, or illusions as a consequence of d...

  6. Insect olfactory coding and memory at multiple timescales.

    Science.gov (United States)

    Gupta, Nitin; Stopfer, Mark

    2011-10-01

    Insects can learn, allowing them great flexibility for locating seasonal food sources and avoiding wily predators. Because insects are relatively simple and accessible to manipulation, they provide good experimental preparations for exploring mechanisms underlying sensory coding and memory. Here we review how the intertwining of memory with computation enables the coding, decoding, and storage of sensory experience at various stages of the insect olfactory system. Individual parts of this system are capable of multiplexing memories at different timescales, and conversely, memory on a given timescale can be distributed across different parts of the circuit. Our sampling of the olfactory system emphasizes the diversity of memories, and the importance of understanding these memories in the context of computations performed by different parts of a sensory system. Published by Elsevier Ltd.

  7. Hypergraph-Based Recognition Memory Model for Lifelong Experience

    Science.gov (United States)

    2014-01-01

    Cognitive agents are expected to interact with and adapt to a nonstationary dynamic environment. As an initial process of decision making in a real-world agent interaction, familiarity judgment leads the following processes for intelligence. Familiarity judgment includes knowing previously encoded data as well as completing original patterns from partial information, which are fundamental functions of recognition memory. Although previous computational memory models have attempted to reflect human behavioral properties on the recognition memory, they have been focused on static conditions without considering temporal changes in terms of lifelong learning. To provide temporal adaptability to an agent, in this paper, we suggest a computational model for recognition memory that enables lifelong learning. The proposed model is based on a hypergraph structure, and thus it allows a high-order relationship between contextual nodes and enables incremental learning. Through a simulated experiment, we investigate the optimal conditions of the memory model and validate the consistency of memory performance for lifelong learning. PMID:25371665

  8. Salam Memorial

    CERN Document Server

    Rubbia, Carlo

    1997-01-01

    by T.W.B. KIBBLE / Blackett Laboratory, Imperial College, London. Recollections of Abdus Salam at Imperial College I shall give a personal account of Professor Salam's life and work from the perspective of a colleague at Imperial College, concentrating particularly but not exclusively on the period leading up to the discovery of the electro-weak theory. If necessary I could perhaps give more detail, but only once I have given more thought to what ground I shall cover. by Sheldon Lee GLASHOW / Harvard University, Cambridge, MA, USA. Memories of Abdus Salam. My interactions with Abdus Salam, weak as they have been, extended over five decades. I regret that we never once collaborated in print or by correspondence. I visited Abdus only twice in London and twice again in Trieste, and met him at the occasional conference or summer school. Our face-to-face encounters could be counted on one's fingers and toes, but we became the best of friends. Others will discuss Abdus as an inspiring teacher, as a great scientist,...

  9. Implicit and explicit memory for spatial information in Alzheimer's disease.

    Science.gov (United States)

    Kessels, R P C; Feijen, J; Postma, A

    2005-01-01

    There is abundant evidence that memory impairment in dementia in patients with Alzheimer's disease (AD) is related to explicit, conscious forms of memory, whereas implicit, unconscious forms of memory function remain relatively intact or are less severely affected. Only a few studies have been performed on spatial memory function in AD, showing that AD patients' explicit spatial memory is impaired, possibly related to hippocampal dysfunction. However, studies on implicit spatial memory in AD are lacking. The current study set out to investigate implicit and explicit spatial memory in AD patients (n=18) using an ecologically valid computer task, in which participants had to remember the locations of various objects in common rooms. The contribution of implicit and explicit memory functions was estimated by means of the process dissociation procedure. The results show that explicit spatial memory is impaired in AD patients compared with a control group (n=21). However, no group difference was found on implicit spatial function. This indicates that spared implicit memory in AD extends to the spatial domain, while the explicit spatial memory function deteriorates. Clinically, this finding might be relevant, in that an intact implicit memory function might be helpful in overcoming problems in explicit processing. Copyright (c) 2005 S. Karger AG, Basel.

  10. III. NIH TOOLBOX COGNITION BATTERY (CB): MEASURING EPISODIC MEMORY

    OpenAIRE

    Bauer, Patricia J.; Dikmen, Sureyya S.; Heaton, Robert K.; Mungas, Dan; Slotkin, Jerry; Beaumont, Jennifer L.

    2013-01-01

    One of the most significant domains of cognition is episodic memory, which allows for rapid acquisition and long-term storage of new information. For purposes of the NIH Toolbox, we devised a new test of episodic memory. The nonverbal NIH Toolbox Picture Sequence Memory Test (TPSMT) requires participants to reproduce the order of an arbitrarily ordered sequence of pictures presented on a computer. To adjust for ability, sequence length varies from 6 to 15 pictures. Multiple trials are adminis...

  11. Active non-volatile memory post-processing

    Energy Technology Data Exchange (ETDEWEB)

    Kannan, Sudarsun; Milojicic, Dejan S.; Talwar, Vanish

    2017-04-11

    A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.
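The record above describes chunks whose metadata names the post-processing the NVRAM sub-processor should apply. A minimal sketch of that dispatch pattern, with chunk fields, operation tags and class names that are illustrative assumptions rather than details from the patent:

```python
from dataclasses import dataclass

@dataclass
class DataChunk:
    payload: list
    post_process: str  # metadata tag naming the post-processing step (assumed tags)

class ActiveNVRAM:
    """Toy model of an active NVRAM: stores chunks and lets a
    sub-processor apply metadata-selected post-processing."""
    def __init__(self):
        self.store = []
        # Registry of post-processing routines (illustrative, not from the patent).
        self.ops = {
            "sum": lambda data: sum(data),
            "sort": lambda data: sorted(data),
        }

    def write(self, chunk: DataChunk):
        # Processor core hands a chunk (data + metadata) to the NVRAM component.
        self.store.append(chunk)

    def post_process_all(self):
        # The sub-processor walks stored chunks and applies the
        # operation named in each chunk's metadata.
        return [self.ops[c.post_process](c.payload) for c in self.store]

nvram = ActiveNVRAM()
nvram.write(DataChunk([3, 1, 2], "sort"))
nvram.write(DataChunk([3, 1, 2], "sum"))
print(nvram.post_process_all())  # → [[1, 2, 3], 6]
```

The point of the design is that the post-processing choice travels with the data rather than being hard-wired into the memory component.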

  12. Organizational memory: from expectations memory to procedural memory

    NARCIS (Netherlands)

    Ebbers, J.J.; Wijnberg, N.M.

    2009-01-01

    Organizational memory is not just the stock of knowledge about how to do things, but also of expectations of organizational members vis-à-vis each other and the organization as a whole. The central argument of this paper is that this second type of organizational memory - organizational expectations

  13. Stochastic memory: getting memory out of noise

    Science.gov (United States)

    Stotland, Alexander; di Ventra, Massimiliano

    2011-03-01

    Memory circuit elements, namely memristors, memcapacitors and meminductors, can store information without the need of a power source. These systems are generally defined in terms of deterministic equations of motion for the state variables that are responsible for memory. However, in real systems noise sources can never be eliminated completely. One would then expect noise to be detrimental to memory. Here, we show that under specific conditions on the noise intensity, memory can actually be enhanced. We illustrate this phenomenon using a physical model of a memristor in which the addition of white noise into the state-variable equation improves the memory and helps the operation of the system. We discuss under which conditions this effect can be realized experimentally, discuss its implications for memory systems reported in the literature, and also analyze the effects of colored noise. Work supported in part by NSF.
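The mechanism described above, white noise injected into a memristor's state equation, can be sketched with a simple Euler-Maruyama integration. The drift term, window function and parameter values below are illustrative assumptions, not the authors' exact model:

```python
import math
import random

def simulate_memristor(sigma, steps=1000, dt=1e-3, seed=0):
    """Integrate a toy memristor state variable x in [0, 1] driven by a
    sinusoidal current; `sigma` sets the white-noise intensity entering
    the state equation (Euler-Maruyama scheme)."""
    rng = random.Random(seed)
    x = 0.5                   # internal state (0 = high resistance; assumed start)
    mu = 1.0                  # mobility-like constant (assumed)
    xs = []
    for k in range(steps):
        i_drive = math.sin(2 * math.pi * k * dt)   # driving current
        drift = mu * i_drive * x * (1 - x)         # window term keeps x in [0, 1]
        noise = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = min(1.0, max(0.0, x + drift * dt + noise))
        xs.append(x)
    return xs

trace = simulate_memristor(sigma=0.1)
print(len(trace), 0.0 <= min(trace) <= max(trace) <= 1.0)  # → 1000 True
```

Comparing traces for different `sigma` values is the kind of experiment the abstract alludes to: whether a nonzero noise intensity helps or hurts the retention of the state variable.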

  14. Review of quantum computation

    International Nuclear Information System (INIS)

    Lloyd, S.

    1992-01-01

    Digital computers are machines that can be programmed to perform logical and arithmetical operations. Contemporary digital computers are ''universal,'' in the sense that a program that runs on one computer can, if properly compiled, run on any other computer that has access to enough memory space and time. Any one universal computer can simulate the operation of any other; and the set of tasks that any such machine can perform is common to all universal machines. Since Bennett's discovery that computation can be carried out in a non-dissipative fashion, a number of Hamiltonian quantum-mechanical systems have been proposed whose time-evolutions over discrete intervals are equivalent to those of specific universal computers. The first quantum-mechanical treatment of computers was given by Benioff, who exhibited a Hamiltonian system with a basis whose members corresponded to the logical states of a Turing machine. In order to make the Hamiltonian local, in the sense that its structure depended only on the part of the computation being performed at that time, Benioff found it necessary to make the Hamiltonian time-dependent. Feynman discovered a way to make the computational Hamiltonian both local and time-independent by incorporating the direction of computation in the initial condition. In Feynman's quantum computer, the program is a carefully prepared wave packet that propagates through different computational states. Deutsch presented a quantum computer that exploits the possibility of existing in a superposition of computational states to perform tasks that a classical computer cannot, such as generating purely random numbers, and carrying out superpositions of computations as a method of parallel processing. In this paper, we show that such computers, by virtue of their common function, possess a common form for their quantum dynamics

  15. Collectively loading an application in a parallel computer

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
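The four steps in the record above (identify the subset, select a job leader, retrieve the application, broadcast it) can be sketched schematically. Node fields, the leader-selection rule and the job names are illustrative assumptions, not details from the patent:

```python
def collective_load(all_nodes, job, app_store):
    # 1. The control system identifies the subset of compute nodes for the job.
    subset = [n for n in all_nodes if job in n["eligible_jobs"]]
    # 2. One node of the subset is selected as job leader (lowest id, assumed rule).
    leader = min(subset, key=lambda n: n["id"])
    # 3. The leader retrieves the application from computer memory/storage...
    app = app_store[job]
    # 4. ...and broadcasts it to every node in the subset.
    for node in subset:
        node["loaded_app"] = app
    return leader, subset

nodes = [{"id": i, "eligible_jobs": {"render"} if i % 2 == 0 else set()}
         for i in range(6)]
leader, subset = collective_load(nodes, "render", {"render": "app-binary"})
print(leader["id"], [n["id"] for n in subset])  # → 0 [0, 2, 4]
```

The design rationale is that only one node touches the (possibly slow, shared) application store; the remaining nodes receive the binary over the faster interconnect.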

  16. EPS Mid-Career Award 2011. Are there multiple memory systems? Tests of models of implicit and explicit memory.

    Science.gov (United States)

    Shanks, David R; Berry, Christopher J

    2012-01-01

    This article reviews recent work aimed at developing a new framework, based on signal detection theory, for understanding the relationship between explicit (e.g., recognition) and implicit (e.g., priming) memory. Within this framework, different assumptions about sources of memorial evidence can be framed. Application to experimental results provides robust evidence for a single-system model in preference to multiple-systems models. This evidence comes from several sources including studies of the effects of amnesia and ageing on explicit and implicit memory. The framework allows a range of concepts in current memory research, such as familiarity, recollection, fluency, and source memory, to be linked to implicit memory. More generally, this work emphasizes the value of modern computational modelling techniques in the study of learning and memory.

  17. Detailed Sensory Memory, Sloppy Working Memory

    OpenAIRE

    Sligte, Ilja G.; Vandenbroucke, Annelinde R. E.; Scholte, H. Steven; Lamme, Victor A. F.

    2010-01-01

    Visual short-term memory (VSTM) enables us to actively maintain information in mind for a brief period of time after stimulus disappearance. According to recent studies, VSTM consists of three stages - iconic memory, fragile VSTM, and visual working memory - with increasingly stricter capacity limits and progressively longer lifetimes. Still, the resolution (or amount of visual detail) of each VSTM stage has remained unexplored and we test this in the present study. We presented people with a...

  18. Detailed sensory memory, sloppy working memory.

    Science.gov (United States)

    Sligte, Ilja G; Vandenbroucke, Annelinde R E; Scholte, H Steven; Lamme, Victor A F

    2010-01-01

    Visual short-term memory (VSTM) enables us to actively maintain information in mind for a brief period of time after stimulus disappearance. According to recent studies, VSTM consists of three stages - iconic memory, fragile VSTM, and visual working memory - with increasingly stricter capacity limits and progressively longer lifetimes. Still, the resolution (or amount of visual detail) of each VSTM stage has remained unexplored and we test this in the present study. We presented people with a change detection task that measures the capacity of all three forms of VSTM, and we added an identification display after each change trial that required people to identify the "pre-change" object. Accurate change detection plus pre-change identification requires subjects to have a high-resolution representation of the "pre-change" object, whereas change detection or identification only can be based on the hunch that something has changed, without exactly knowing what was presented before. We observed that people maintained 6.1 objects in iconic memory, 4.6 objects in fragile VSTM, and 2.1 objects in visual working memory. Moreover, when people detected the change, they could also identify the pre-change object on 88% of the iconic memory trials, on 71% of the fragile VSTM trials and merely on 53% of the visual working memory trials. This suggests that people maintain many high-resolution representations in iconic memory and fragile VSTM, but only one high-resolution object representation in visual working memory.

  19. Consolidation of long-term memory: Evidence and alternatives.

    NARCIS (Netherlands)

    Meeter, M.; Murre, J.M.J.

    2004-01-01

    Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how

  20. Database architecture optimized for the new bottleneck: Memory access

    NARCIS (Netherlands)

    P.A. Boncz (Peter); S. Manegold (Stefan); M.L. Kersten (Martin)

    1999-01-01

    In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the

  1. Optimizing Database Architecture for the New Bottleneck: Memory Access

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2000-01-01

    In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the

  2. Enhancing Assisted Living Technology with Extended Visual Memory

    Directory of Open Access Journals (Sweden)

    Joo-Hwee Lim

    2011-05-01

    Human vision and memory are powerful cognitive faculties by which we understand the world. However, they are imperfect and, further, subject to deterioration with age. We propose a cognitive-inspired computational model, Extended Visual Memory (EVM), within the Computer-Aided Vision (CAV) framework, to assist humans in vision-related tasks. We exploit wearable sensors such as cameras, GPS and ambient computing facilities to complement a user's vision and memory functions by answering four types of queries central to visual activities, namely, Retrieval, Understanding, Navigation and Search. Learning of EVM relies on both frequency-based and attention-driven mechanisms to store view-based visual fragments (VF), which are abstracted into high-level visual schemas (VS), both in the visual long-term memory. During inference, the visual short-term memory plays a key role in visual similarity computation between the input (or its schematic representation) and VF, exemplified from VS when necessary. We present an assisted living scenario, termed EViMAL (Extended Visual Memory for Assisted Living), targeted at mild dementia patients, to provide novel functions such as hazard warning, visual reminders, object look-up and event review. We envisage EVM having potential benefits in alleviating memory loss, improving recall precision and enhancing memory capacity through external support.

  3. Breaking the memory wall in MonetDB

    NARCIS (Netherlands)

    P.A. Boncz (Peter); M.L. Kersten (Martin); S. Manegold (Stefan)

    2008-01-01

    In the past decades, advances in speed of commodity CPUs have far outpaced advances in RAM latency. Main-memory access has therefore become a performance bottleneck for many computer applications; a phenomenon that is widely known as the "memory wall." In this paper, we report how

  4. Breaking the memory wall in MonetDB

    NARCIS (Netherlands)

    Boncz, P.A.; Kersten, M.L.; Manegold, S.

    2008-01-01

    In the past decades, advances in speed of commodity CPUs have far outpaced advances in RAM latency. Main-memory access has therefore become a performance bottleneck for many computer applications; a phenomenon that is widely known as the "memory wall." In this paper, we report how research around

  5. Contrasting single and multi-component working-memory systems in dual tasking.

    Science.gov (United States)

    Nijboer, Menno; Borst, Jelmer; van Rijn, Hedderik; Taatgen, Niels

    2016-05-01

    Working memory can be a major source of interference in dual tasking. However, there is no consensus on whether this interference is the result of a single working memory bottleneck, or of interactions between different working memory components that together form a complete working-memory system. We report a behavioral and an fMRI dataset in which working memory requirements are manipulated during multitasking. We show that a computational cognitive model that assumes a distributed version of working memory accounts for both behavioral and neuroimaging data better than a model that takes a more centralized approach. The model's working memory consists of an attentional focus, declarative memory, and a subvocalized rehearsal mechanism. Thus, the data and model favor an account where working memory interference in dual tasking is the result of interactions between different resources that together form a working-memory system. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Age effects on explicit and implicit memory

    Directory of Open Access Journals (Sweden)

    Emma Ward

    2013-09-01

    It is well documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favours the single-system view. Implications for the memory systems debate are discussed.

  7. Age effects on explicit and implicit memory.

    Science.gov (United States)

    Ward, Emma V; Berry, Christopher J; Shanks, David R

    2013-01-01

    It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed.

  8. Bulk-memory processor for data acquisition

    International Nuclear Information System (INIS)

    Nelson, R.O.; McMillan, D.E.; Sunier, J.W.; Meier, M.; Poore, R.V.

    1981-01-01

    To meet the diverse needs and data rate requirements at the Van de Graaff and Weapons Neutron Research (WNR) facilities, a bulk memory system has been implemented which includes a fast and flexible processor. This bulk memory processor (BMP) utilizes bit slice and microcode techniques and features a 24 bit wide internal architecture allowing direct addressing of up to 16 megawords of memory and histogramming up to 16 million counts per channel without overflow. The BMP is interfaced to the MOSTEK MK 8000 bulk memory system and to the standard MODCOMP computer I/O bus. Coding for the BMP both at the microcode level and with macro instructions is supported. The generalized data acquisition system has been extended to support the BMP in a manner transparent to the user

  9. Evolving spiking networks with variable resistive memories.

    Science.gov (United States)

    Howard, Gerard; Bull, Larry; de Lacy Costello, Ben; Gale, Ella; Adamatzky, Andrew

    2014-01-01

    Neuromorphic computing is a brainlike information processing paradigm that requires adaptive learning mechanisms. A spiking neuro-evolutionary system is used for this purpose; plastic resistive memories are implemented as synapses in spiking neural networks. The evolutionary design process exploits parameter self-adaptation and allows the topology and synaptic weights to be evolved for each network in an autonomous manner. Variable resistive memories are the focus of this research; each synapse has its own conductance profile which modifies the plastic behaviour of the device and may be altered during evolution. These variable resistive networks are evaluated on a noisy robotic dynamic-reward scenario against two static resistive memories and a system containing standard connections only. The results indicate that the extra behavioural degrees of freedom available to the networks incorporating variable resistive memories enable them to outperform the comparative synapse types.

  10. The neural bases of orthographic working memory

    Directory of Open Access Journals (Sweden)

    Jeremy Purcell

    2014-04-01

    First, these results reveal a neurotopography of OWM lesion sites that is well aligned with results from neuroimaging of orthographic working memory in neurally intact participants (Rapp & Dufor, 2011). Second, the dorsal neurotopography of the OWM lesion overlap is clearly distinct from what has been reported for lesions associated with either lexical or sublexical deficits (e.g., Henry, Beeson, Stark, & Rapcsak, 2007; Rapcsak & Beeson, 2004); these have, respectively, been identified with the inferior occipital/temporal and superior temporal/inferior parietal regions. These neurotopographic distinctions support the claims of the computational distinctiveness of long-term vs. working memory operations. The specific lesion loci raise a number of questions to be discussed regarding: (a) the selectivity of these regions and associated deficits to orthographic working memory vs. working memory more generally, and (b) the possibility that different lesion sub-regions may correspond to different components of the OWM system.

  11. Prospective memory, working memory, retrospective memory and self-rated memory performance in persons with intellectual disability

    OpenAIRE

    Levén, Anna; Lyxell, Björn; Andersson, Jan; Danielsson, Henrik; Rönnberg, Jerker

    2008-01-01

    The purpose of the present study was to examine the relationship between prospective memory, working memory, retrospective memory and self-rated memory capacity in adults with and without intellectual disability. Prospective memory was investigated by means of a picture-based task. Working memory was measured as performance on span tasks. Retrospective memory was scored as recall of subject performed tasks. Self-ratings of memory performance were based on the prospective and retrospective mem...

  12. Phenomenological validity of an OCD-memory model and the remember/know distinction

    NARCIS (Netherlands)

    van den Hout, M.; Kindt, M.

    2003-01-01

    In earlier experiments using interactive computer animation with healthy subjects, it was found that displaying compulsive-like repeated checking behavior affects memory. That is, checking does not alter actual memory accuracy, but it does affect 'meta-memory': as checking continues, recollections

  13. Coping with Memory Loss

    Science.gov (United States)

    ... be evaluated by a health professional. What Causes Memory Loss? Anything that affects cognition—the process of ...

  14. Memory and Aging

    Science.gov (United States)

    Losing keys, misplacing a wallet, or forgetting someone’s name are common experiences. But for people nearing or over age 65, such memory lapses can be frightening. They wonder if they ...

  15. Tracing Cultural Memory

    DEFF Research Database (Denmark)

    Wiegand, Frauke Katharina

    We encounter, relate to and make use of our past and that of others in multifarious and increasingly mobile ways. Tourism is one of the main paths for encountering sites of memory. This thesis examines tourists’ creative appropriations of sites of memory – the objects and future memories inspired by their encounters – to address a question that thirty years of ground-breaking research into memory has not yet sufficiently answered: What can we learn about the dynamics of cultural memory by examining mundane accounts of touristic encounters with sites of memory? From Blaavand Beach in Western Denmark ... of memory. They highlight the role of mundane uses of the past and indicate the need for cross-disciplinary research on the visual and on memory ...

  16. Propagation of soil moisture memory to runoff and evapotranspiration

    Science.gov (United States)

    Orth, R.; Seneviratne, S. I.

    2012-10-01

    As a key variable of the land-climate system, soil moisture is a main driver of runoff and evapotranspiration under certain conditions. Soil moisture furthermore exhibits outstanding memory (persistence) characteristics. For runoff, too, many studies report distinct low-frequency variations that represent a memory. Using data from over 100 near-natural catchments located across Europe, we investigate in this study the connection between soil moisture memory and the respective memory of runoff and evapotranspiration on different time scales. For this purpose we use a simple water-balance model in which the dependencies of runoff (normalized by precipitation) and evapotranspiration (normalized by radiation) on soil moisture are fitted using runoff observations. The model therefore allows us to compute the memory of soil moisture, runoff and evapotranspiration on the catchment scale. We find considerable memory in soil moisture and runoff in many parts of the continent, and evapotranspiration also displays some memory on a monthly time scale in some catchments. We show that the memory of runoff and evapotranspiration jointly depends on soil moisture memory and on the strength of the coupling of runoff and evapotranspiration to soil moisture. Furthermore, we find that these coupling strengths depend on the shape of the fitted dependencies and on the variance of the meteorological forcing. To better interpret the magnitude of the respective memories across Europe, we finally provide a new perspective on hydrological memory by relating it to the mean duration required to recover from anomalies exceeding a certain threshold.
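A toy bucket model in the spirit of the record above: runoff (normalized by precipitation) and evapotranspiration (normalized by radiation) both grow with soil moisture, and "memory" is measured as lag-1 autocorrelation of the soil-moisture series. The functional forms and parameter values are illustrative assumptions, not the fitted dependencies from the study:

```python
import random

def simulate(alpha=0.5, steps=500, seed=1):
    """Step a single-bucket water balance forward under random
    precipitation forcing; returns the soil-moisture series."""
    rng = random.Random(seed)
    s = 0.5                      # soil moisture as a fraction of capacity
    series = []
    for _ in range(steps):
        p = rng.random()         # precipitation forcing (arbitrary units)
        runoff = p * s ** 2      # runoff/precipitation rises with wetness (assumed)
        et = alpha * s           # ET/radiation rises linearly with wetness (assumed)
        s = min(1.0, max(0.0, s + 0.1 * (p - runoff - et)))
        series.append(s)
    return series

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a simple proxy for memory/persistence."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

s = simulate()
print(round(lag1_autocorr(s), 2))
```

Because each step changes soil moisture only slightly, the series is strongly persistent; coupling runoff and ET more tightly to soil moisture (e.g. a larger `alpha`) dampens anomalies faster and shortens the memory.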

  17. Short-term memory in networks of dissociated cortical neurons.

    Science.gov (United States)

    Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J

    2013-01-30

    Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.

  18. Bank switched memory interface for an image processor

    International Nuclear Information System (INIS)

    Barron, M.; Downward, J.

    1980-09-01

    A commercially available image processor is interfaced to a PDP-11/45 through an 8K window of memory addresses. When the image processor was not in use, it was desired to use the 8K address space as real memory. The standard method of accomplishing this would have been to use UNIBUS switches to switch in either the physical 8K bank of memory or the image processor memory. This method has the disadvantage of being rather expensive. As a simple alternative, a device was built to selectively enable or disable either an 8K bank of memory or the image processor memory. To enable the image processor under program control, GEN is contracted in size, the memory is disabled, a device partition for the image processor is created above GEN, and the image processor memory is enabled. The process is reversed to restore the memory to GEN. The hardware to enable/disable the image and computer memories is controlled using spare bits from a DR-11K output register. The image processor and physical memory can be switched in or out on line with no adverse effects on the system's operation.
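
    The enable/disable scheme can be sketched as bit manipulation on an output register. The bit positions and helper names below are hypothetical, since the report does not specify which spare DR-11K bits are used:

```python
# Spare bits of the DR-11K output register (bit assignments hypothetical).
MEM_ENABLE = 0x1   # physical 8K memory bank mapped into the window
IP_ENABLE = 0x2    # image processor memory mapped into the window

def switch_to_image_processor(reg):
    """Map the 8K window to the image processor; drop the RAM enable
    first so both devices never drive the same addresses at once."""
    reg &= ~MEM_ENABLE
    reg |= IP_ENABLE
    return reg

def switch_to_memory(reg):
    """Reverse of the above: restore the 8K bank of real memory."""
    reg &= ~IP_ENABLE
    reg |= MEM_ENABLE
    return reg

reg = MEM_ENABLE          # boot state: real memory occupies the window
reg = switch_to_image_processor(reg)
reg = switch_to_memory(reg)
```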

  19. Emotional Memory Persists Longer than Event Memory

    Science.gov (United States)

    Kuriyama, Kenichi; Soshi, Takahiro; Fujii, Takeshi; Kim, Yoshiharu

    2010-01-01

    The interaction between amygdala-driven and hippocampus-driven activities is expected to explain why emotion enhances episodic memory recognition. However, overwhelming behavioral evidence regarding the emotion-induced enhancement of immediate and delayed episodic memory recognition has not been obtained in humans. We found that the recognition…

  20. Gravitational-wave memory revisited: Memory from the merger and recoil of binary black holes

    International Nuclear Information System (INIS)

    Favata, Marc

    2009-01-01

    Gravitational-wave memory refers to the permanent displacement of the test masses in an idealized (freely-falling) gravitational-wave interferometer. Inspiraling binaries produce a particularly interesting form of memory: the Christodoulou memory. Although it originates from nonlinear interactions at 2.5 post-Newtonian order, the Christodoulou memory affects the gravitational-wave amplitude at leading (Newtonian) order. Previous calculations have computed this non-oscillatory amplitude correction during the inspiral phase of binary coalescence. Using an 'effective-one-body' description calibrated with the results of numerical relativity simulations, the evolution of the memory during the inspiral, merger, and ringdown phases, as well as the memory's final saturation value, are calculated. Using this model for the memory, the prospects for its detection are examined, particularly for supermassive black hole binary coalescences that LISA will detect with high signal-to-noise ratios. Coalescing binary black holes also experience center-of-mass recoil due to the anisotropic emission of gravitational radiation. These recoils can manifest themselves in the gravitational-wave signal in the form of a 'linear' memory and a Doppler shift of the quasi-normal-mode frequencies. The prospects for observing these effects are also discussed.

  1. Music, memory and emotion

    Science.gov (United States)

    Jäncke, Lutz

    2008-01-01

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory. PMID:18710596

  2. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory.

  3. Saving Malta's music memory

    OpenAIRE

    Sant, Toni

    2013-01-01

    Maltese music is being lost. Along with it Malta loses its culture, way of life, and memories. Dr Toni Sant is trying to change this trend through the Malta Music Memory Project (M3P) http://www.um.edu.mt/think/saving-maltas-music-memory-2/

  4. Associative Memory Acceptors.

    Science.gov (United States)

    Card, Roger

    The properties of an associative memory are examined in this paper from the viewpoint of automata theory. A device called an associative memory acceptor is studied under real-time operation. The family "L" of languages accepted by real-time associative memory acceptors is shown to properly contain the family of languages accepted by one-tape,…

  5. Generation and Context Memory

    Science.gov (United States)

    Mulligan, Neil W.; Lozito, Jeffrey P.; Rosner, Zachary A.

    2006-01-01

    Generation enhances memory for occurrence but may not enhance other aspects of memory. The present study further delineates the negative generation effect in context memory reported in N. W. Mulligan (2004). First, the negative generation effect occurred for perceptual attributes of the target item (its color and font) but not for extratarget…

  6. Music, memory and emotion.

    Science.gov (United States)

    Jäncke, Lutz

    2008-08-08

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory.

  7. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulation on its computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single-cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  8. Offline computing and networking

    International Nuclear Information System (INIS)

    Appel, J.A.; Avery, P.; Chartrand, G.

    1985-01-01

    This note summarizes the work of the Offline Computing and Networking Group. The report is divided into two sections; the first deals with the computing and networking requirements and the second with the proposed way to satisfy those requirements. In considering the requirements, we have considered two types of computing problems. The first is CPU-intensive activity such as production data analysis (reducing raw data to DST), production Monte Carlo, or engineering calculations. The second is physicist-intensive computing such as program development, hardware design, physics analysis, and detector studies. For both types of computing, we examine a variety of issues. These include a set of quantitative questions: how much CPU power (for turn-around and for through-put), how much memory, mass-storage, bandwidth, and so on. There are also very important qualitative issues: what features must be provided by the operating system, and what tools are needed for program design, code management, database management, and graphics.

  9. Dynamical behaviour of neuronal networks iterated with memory

    International Nuclear Information System (INIS)

    Melatagia, P.M.; Ndoundam, R.; Tchuente, M.

    2005-11-01

    We study memory iteration, where the updating considers a longer history of each site and the set of interaction matrices is palindromic. We analyze two different ways of updating the networks: parallel iteration with memory and sequential iteration with memory, which we introduce in this paper. For parallel iteration, we define a Lyapunov functional which permits us to characterize the periodic behaviour and explicitly bound the transient lengths of neural networks iterated with memory. For sequential iteration, we use an algebraic invariant to characterize the periodic behaviour of the studied model of neural computation. (author)
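
    A minimal sketch of parallel iteration with memory, under the assumption of a threshold (+/-1) network whose update sums interactions over the last m states. The palindromic matrix set, thresholds, and all parameter choices here are illustrative:

```python
import numpy as np

def parallel_step(history, mats, theta):
    """One parallel update: every neuron uses the last m network states.

    history: list of state vectors, most recent first.
    mats: list of m interaction matrices (palindromic here:
          mats[k] == mats[m-1-k]); choices are illustrative.
    """
    field = sum(a @ x for a, x in zip(mats, history)) - theta
    return np.where(field >= 0, 1, -1)

def iterate(initial_history, mats, theta, steps):
    """Iterate the memory dynamics, keeping the growing state history."""
    hist = list(initial_history)
    m = len(mats)
    traj = []
    for _ in range(steps):
        x = parallel_step(hist[:m], mats, theta)
        hist.insert(0, x)
        traj.append(x)
    return traj

rng = np.random.default_rng(1)
n, m = 6, 2
a = rng.integers(-2, 3, (n, n))
mats = [a, a]                      # palindromic memory of length 2
theta = np.zeros(n)
hist0 = [np.where(rng.random(n) < 0.5, 1, -1) for _ in range(m)]
traj = iterate(hist0, mats, theta, 50)
```

    Because the state space is finite and the update deterministic, every trajectory eventually falls onto a cycle; the paper's Lyapunov functional bounds how long that transient can be.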

  10. Malware Memory Analysis of the IVYL Linux Rootkit: Investigating a Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2015-04-01

    The purpose of this report is to examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can analyse a memory image from a Linux system infected with the IVYL rootkit using the Volatility memory analysis framework. The skills amassed by incident handlers and investigators while using Volatility to examine Windows memory images will be of some help...

  11. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di; Diao, Ruisheng; Huang, Zhenyu

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by a protective branch switching operation. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
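
    The two parallelization styles can be contrasted with a toy Python analogy (not OpenMP or MPI): workers that write partial results directly into one shared array versus workers that keep their data private and exchange explicit messages with a collector:

```python
import threading
import queue
import numpy as np

def shared_memory_sum(data, n_workers=4):
    """Shared-memory style (OpenMP-like): workers write partial results
    into one array that all of them can see."""
    partial = np.zeros(n_workers)
    chunks = np.array_split(data, n_workers)
    def work(i):
        partial[i] = chunks[i].sum()   # direct write to shared storage
    threads = [threading.Thread(target=work, args=(i,))
               for i in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return partial.sum()

def message_passing_sum(data, n_workers=4):
    """Distributed-memory style (MPI-like): each worker owns a private
    chunk and explicitly "sends" its partial result to a collector."""
    inbox = queue.Queue()
    chunks = np.array_split(data, n_workers)  # each chunk stands in for
    def work(i):                              # a rank's private memory
        inbox.put(chunks[i].sum())            # explicit send to rank 0
    threads = [threading.Thread(target=work, args=(i,))
               for i in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(inbox.get() for _ in range(n_workers))

data = np.arange(1000, dtype=float)
total_shared = shared_memory_sum(data)
total_message = message_passing_sum(data)
```

    Both styles compute the same reduction; the design difference is who can touch which memory, which is exactly the trade-off between the OpenMP and MPI implementations compared in the paper.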

  12. ECT and memory loss.

    Science.gov (United States)

    Squire, L R

    1977-09-01

    The author reviews several studies that clarify the nature of the memory loss associated with ECT. Bilateral ECT produced greater anterograde memory loss than right unilateral ECT and more extensive retrograde amnesia than unilateral ECT. Reactivating memories just before ECT did not produce amnesia. Capacity for new learning recovered substantially by several months after ECT, but memory complaints were common in individuals who had received bilateral ECT. Other things being equal, right unilateral ECT seems preferable to bilateral ECT because the risks to memory associated with unilateral ECT are smaller.

  13. Quantum random access memory

    OpenAIRE

    Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo

    2007-01-01

    A random access memory (RAM) uses n bits to randomly address N=2^n distinct memory cells. A quantum random access memory (qRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust qRAM algorithm, as it in general requires entanglement among exponentially l...
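
    The O(log N) claim can be illustrated classically: in a bucket-brigade-style binary routing tree, only the switches on the root-to-leaf path for a given address are activated. This sketch merely counts switches and routes a single classical address; it does not model superpositions of addresses:

```python
def route_address(address_bits):
    """Classical sketch of bucket-brigade routing in a binary tree.

    A RAM with N = 2**n cells is addressed by a depth-n tree of switches;
    only the n switches on the root-to-leaf path need to be thrown,
    versus activating O(N) decoder elements in a conventional design.
    """
    thrown = []                 # (tree level, node index) of thrown switches
    node = 0
    for level, bit in enumerate(address_bits):
        thrown.append((level, node))
        node = 2 * node + bit   # descend left (0) or right (1)
    return node, thrown

n = 10                          # N = 1024 memory cells
cell, switches = route_address([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
```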

  14. Towards realising high-speed large-bandwidth quantum memory

    Institute of Scientific and Technical Information of China (English)

    SHI BaoSen; DING DongSheng

    2016-01-01

    Indispensable for quantum communication and quantum computation, quantum memory executes on-demand storage and retrieval of quantum states such as those of a single photon, an entangled pair or squeezed states. Among the various forms of quantum memory, Raman quantum memory has advantages for its broadband and high-speed characteristics, which result in a huge potential for applications in quantum networks and quantum computation. However, realising Raman quantum memory with true single photons and photonic entanglement is challenging. In this review, after briefly introducing the main benchmarks in the development of quantum memory and describing the state of the art, we focus on our recent experimental progress in quantum memory storage of quantum states using the Raman scheme.

  15. The Processing Using Memory Paradigm: In-DRAM Bulk Copy, Initialization, Bitwise AND and OR

    OpenAIRE

    Seshadri, Vivek; Mutlu, Onur

    2016-01-01

    In existing systems, the off-chip memory interface allows the memory controller to perform only read or write operations. Therefore, to perform any operation, the processor must first read the source data and then write the result back to memory after performing the operation. This approach consumes high latency, bandwidth, and energy for operations that work on a large amount of data. Several works have proposed techniques to process data near memory by adding a small amount of compute logic...

  16. Memory dynamics under stress.

    Science.gov (United States)

    Quaedflieg, Conny W E M; Schwabe, Lars

    2018-03-01

    Stressful events have a major impact on memory. They modulate memory formation in a time-dependent manner, closely linked to the temporal profile of action of major stress mediators, in particular catecholamines and glucocorticoids. Shortly after stressor onset, rapidly acting catecholamines and fast, non-genomic glucocorticoid actions direct cognitive resources to the processing and consolidation of the ongoing threat. In parallel, control of memory is biased towards rather rigid systems, promoting habitual forms of memory allowing efficient processing under stress, at the expense of "cognitive" systems supporting memory flexibility and specificity. In this review, we discuss the implications of this shift in the balance of multiple memory systems for the dynamics of the memory trace. Specifically, stress appears to hinder the incorporation of contextual details into the memory trace, to impede the integration of new information into existing knowledge structures, to impair the flexible generalisation across past experiences, and to hamper the modification of memories in light of new information. Delayed, genomic glucocorticoid actions might reverse the control of memory, thus restoring homeostasis and "cognitive" control of memory again.

  17. Detailed sensory memory, sloppy working memory

    Directory of Open Access Journals (Sweden)

    Ilja G Sligte

    2010-10-01

    Visual short-term memory (VSTM) enables us to actively maintain information in mind for a brief period of time after stimulus disappearance. According to recent studies, VSTM consists of three stages (iconic memory, fragile VSTM, and visual working memory) with increasingly stricter capacity limits and progressively longer lifetimes. Still, the resolution (or amount of visual detail) of each VSTM stage has remained unexplored, and we test this in the present study. We presented people with a change detection task that measures the capacity of all three forms of VSTM, and we added an identification display after each change trial that required people to identify the pre-change object. Accurate change detection plus pre-change identification requires subjects to have a high-resolution representation of the pre-change object, whereas change detection or identification alone can be based on the hunch that something has changed, without exactly knowing what was presented before. We observed that people maintained 6.1 objects in iconic memory, 4.6 objects in fragile VSTM and 2.1 objects in visual working memory. Moreover, when people detected the change, they could also identify the pre-change object on 88 percent of the iconic memory trials, on 71 percent of the fragile VSTM trials and merely on 53 percent of the visual working memory trials. This suggests that people maintain many high-resolution representations in iconic memory and fragile VSTM, but only one high-resolution object representation in visual working memory.

  18. NAND flash memory technologies

    CERN Document Server

    Aritome, Seiichi

    2016-01-01

    This book discusses basic and advanced NAND flash memory technologies, including the principle of NAND flash, memory cell technologies, multi-bits cell technologies, scaling challenges of memory cell, reliability, and 3-dimensional cell as the future technology. Chapter 1 describes the background and early history of NAND flash. The basic device structures and operations are described in Chapter 2. Next, the author discusses the memory cell technologies focused on scaling in Chapter 3, and introduces the advanced operations for multi-level cells in Chapter 4. The physical limitations for scaling are examined in Chapter 5, and Chapter 6 describes the reliability of NAND flash memory. Chapter 7 examines 3-dimensional (3D) NAND flash memory cells and discusses the pros and cons in structure, process, operations, scalability, and performance. In Chapter 8, challenges of 3D NAND flash memory are discussed. Finally, in Chapter 9, the author summarizes and describes the prospect of technologies and market for the fu...

  19. Extreme Scale Computing Studies

    Science.gov (United States)

    2010-12-01

    ...systems that would fall under the Exascale rubric. In this chapter, we first discuss the attributes by which achievement of the label "Exascale" may be... ...genetic stochasticity (random mating, mutation, etc.). Outcomes are thus stochastic as well, and ecologists wish to ask questions like, "What is the...

  20. Computing Equilibrium Chemical Compositions

    Science.gov (United States)

    Mcbride, Bonnie J.; Gordon, Sanford

    1995-01-01

    Chemical Equilibrium With Transport Properties, 1993 (CET93) computer program provides data on chemical-equilibrium compositions. Aids calculation of thermodynamic properties of chemical systems. Information essential in design and analysis of such equipment as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93/PC is version of CET93 specifically designed to run within 640K memory limit of MS-DOS operating system. CET93/PC written in FORTRAN.

  1. Stochastic memory: Memory enhancement due to noise

    Science.gov (United States)

    Stotland, Alexander; di Ventra, Massimiliano

    2012-01-01

    There are certain classes of resistors, capacitors, and inductors that, when subject to a periodic input of appropriate frequency, develop hysteresis loops in their characteristic response. Here we show that the hysteresis of such memory elements can also be induced by white noise of appropriate intensity even at very low frequencies of the external driving field. We illustrate this phenomenon using a physical model of memory resistor realized by TiO2 thin films sandwiched between metallic electrodes and discuss under which conditions this effect can be observed experimentally. We also discuss its implications on existing memory systems described in the literature and the role of colored noise.
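
    A minimal simulation in the spirit of the model, assuming a toy current-controlled memristor whose internal state w in [0, 1] integrates the current. Parameter values and the white-noise intensity are illustrative, not fitted to TiO2 devices:

```python
import numpy as np

def memristor_trace(voltage, dt=1e-3, r_on=100.0, r_off=16e3, mu=1e3):
    """Toy current-controlled memristor (loosely TiO2-like; parameters
    illustrative). Resistance interpolates between Ron and Roff as the
    state variable w integrates the drive history."""
    w = 0.5
    states = []
    for v in voltage:
        r = r_on * w + r_off * (1.0 - w)
        i = v / r
        w = min(max(w + mu * i * dt, 0.0), 1.0)   # state keeps the memory
        states.append(w)
    return np.array(states)

rng = np.random.default_rng(2)
t = np.arange(0.0, 2.0, 1e-3)
slow_drive = np.sin(2 * np.pi * 0.5 * t)                 # low-frequency field
noisy_drive = slow_drive + rng.normal(0.0, 1.0, t.size)  # added white noise
w_clean = memristor_trace(slow_drive)
w_noisy = memristor_trace(noisy_drive)
```

    Comparing the two state trajectories shows how noise of a given intensity changes the excursions of the internal state, which is what opens (or closes) a hysteresis loop in the current-voltage response at low drive frequency.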

  2. Disk access controller for Multi 8 computer

    International Nuclear Information System (INIS)

    Segalard, Jean

    1970-01-01

    After having presented the initial characteristics and weaknesses of the software provided for the control of a memory disk coupled with a Multi 8 computer, the author reports the development and improvement of this controller software. He presents the different constitutive parts of the computer and the operation of the disk coupling and of the direct memory access. He reports the development of the disk access controller: software organisation, loader, subprograms and statements.

  3. Investigation of Cloud Computing: Applications and Challenges

    OpenAIRE

    Amid Khatibi Bardsiri; Anis Vosoogh; Fatemeh Ahoojoosh

    2014-01-01

    Cloud computing is a model for saving data or knowledge on distant servers through the Internet. It can save the required memory space and reduce the cost of extending memory capacity on users' own machines. Therefore, cloud computing has several benefits for individuals as well as organizations. It provides protection for personal and organizational data. Further, with the help of a cloud service, a business owner, organization manager or service provider will be able to make privacy an...

  4. Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator

    Science.gov (United States)

    Asaad, Sameh W.; Kapur, Mohit

    2016-03-15

    A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the given set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd, and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle-accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.
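
    The port-multiplexing idea can be sketched as follows: if the target memory runs P times faster than the DUT clock, one physical port can service P logical ports within a single DUT cycle, so the DUT still sees a multi-port memory at its own clock boundary. The request format and in-cycle ordering below are assumptions for illustration:

```python
def emulate_cycle(dut_requests, target_mem):
    """One DUT clock cycle at Fd: the single-port target memory runs at
    Ft = len(dut_requests) * Fd and services each port request in its own
    fast cycle, keeping the behaviour cycle-accurate at the DUT boundary.
    """
    responses = []
    for op, addr, value in dut_requests:   # one fast target cycle per port
        if op == "write":
            target_mem[addr] = value
            responses.append(None)
        else:                              # "read"
            responses.append(target_mem.get(addr, 0))
    return responses

mem = {}
# Four logical ports issue requests in the same DUT cycle; ports are
# assumed to be serviced in a fixed order within the cycle.
out = emulate_cycle([("write", 0x10, 7), ("write", 0x20, 9),
                     ("read", 0x10, None), ("read", 0x20, None)], mem)
```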

  5. SAR: a fast computer for CAMAC data acquisition

    International Nuclear Information System (INIS)

    Bricaud, B.; Faivre, J.C.; Pain, J.

    1979-01-01

    An original 32-bit computer architecture has been designed, based on bit-slice microprocessors, around the AMD 2901. A 32-bit instruction set was defined with a 200 ns execution time per instruction. Basic memory capacity is equally divided into two 32K 32-bit zones named Program memory and Data memory. The computer has a Camac branch interface; during a Camac transfer activation, which lasts seven cycles, five cycles are free for processing.

  6. Neural Global Pattern Similarity Underlies True and False Memories.

    Science.gov (United States)

    Ye, Zhifang; Zhu, Bi; Zhuang, Liping; Lu, Zhonglin; Chen, Chuansheng; Xue, Gui

    2016-06-22

    The neural processes giving rise to human memory strength signals remain poorly understood. Inspired by formal computational models that posit a central role of global matching in memory strength, we tested a novel hypothesis that the strengths of both true and false memories arise from the global similarity of an item's neural activation pattern during retrieval to that of all the studied items during encoding (i.e., the encoding-retrieval neural global pattern similarity [ER-nGPS]). We revealed multiple ER-nGPS signals that carried distinct information and contributed differentially to true and false memories: whereas the ER-nGPS in the parietal regions reflected semantic similarity and was scaled with the recognition strengths of both true and false memories, ER-nGPS in the visual cortex contributed solely to true memory. Moreover, ER-nGPS differences between the parietal and visual cortices were correlated with frontal monitoring processes. By combining computational and neuroimaging approaches, our results advance a mechanistic understanding of memory strength in recognition. What neural processes give rise to memory strength signals, and lead to our conscious feelings of familiarity? Using fMRI, we found that the memory strength of a given item depends not only on how it was encoded during learning, but also on the similarity of its neural representation with other studied items. The global neural matching signal, mainly in the parietal lobule, could account for the memory strengths of both studied and unstudied items. Interestingly, a different global matching signal, originating from the visual cortex, could distinguish true from false memories. The findings reveal multiple neural mechanisms underlying the memory strengths of events registered in the brain.
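
    A sketch of how an encoding-retrieval global pattern similarity measure might be computed, assuming it is the mean correlation between one item's retrieval pattern and the encoding patterns of all studied items. This is an illustration on synthetic vectors, not the authors' analysis pipeline:

```python
import numpy as np

def er_ngps(retrieval_pattern, encoding_patterns):
    """Encoding-retrieval neural global pattern similarity (illustrative):
    mean correlation of one item's retrieval pattern with the activation
    patterns of ALL studied items at encoding."""
    r = retrieval_pattern - retrieval_pattern.mean()
    r = r / np.linalg.norm(r)
    sims = []
    for e in encoding_patterns:
        e = e - e.mean()
        sims.append(float(r @ (e / np.linalg.norm(e))))
    return float(np.mean(sims))

rng = np.random.default_rng(3)
proto = rng.normal(size=50)                        # shared list structure
studied = [proto + rng.normal(scale=0.5, size=50) for _ in range(20)]
old_item = proto + rng.normal(scale=0.5, size=50)  # resembles the list
new_item = rng.normal(size=50)                     # unrelated lure
```

    An item resembling the study list yields a higher global match than an unrelated lure, which is the sense in which global similarity can act as a memory strength signal for both true and false recognition.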

  7. Chemical memory reactions induced bursting dynamics in gene expression.

    Science.gov (United States)

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memorial phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has been observed in experiments recently. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggested that this innovative theory is an effective and powerful tool to study memory processes and conditional chemical reactions in a wide range of complex biological systems.
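
    The idea of a conditional, history-dependent reaction can be sketched with a Gillespie-style simulation in which the gene can switch off only after it has been on for a minimum time. The rates, the gating rule, and the simplification of re-evaluating the memory condition only at event times are all illustrative assumptions, not the paper's algorithm:

```python
import random

def memory_ssa(t_end, k_on=0.5, on_min=2.0, k_syn=10.0, k_deg=1.0, seed=0):
    """Toy stochastic simulation with one 'memory' reaction: the gene-off
    reaction is available only once the gene has been ON for at least
    on_min time units (a conditional, history-dependent reaction).
    Approximate: the condition is rechecked only at event times."""
    rng = random.Random(seed)
    t, gene_on, t_on, mrna = 0.0, False, 0.0, 0
    trace = []
    while t < t_end:
        rates = []
        if gene_on:
            rates.append(("syn", k_syn))
            if t - t_on >= on_min:          # memory condition gates "off"
                rates.append(("off", 1.0))
        else:
            rates.append(("on", k_on))
        if mrna > 0:
            rates.append(("deg", k_deg * mrna))
        total = sum(r for _, r in rates)
        t += rng.expovariate(total)         # time to next event
        x = rng.uniform(0.0, total)         # pick the event by its rate
        for name, r in rates:
            if x < r:
                break
            x -= r
        if name == "on":
            gene_on, t_on = True, t
        elif name == "off":
            gene_on = False
        elif name == "syn":
            mrna += 1
        else:
            mrna = max(mrna - 1, 0)
        trace.append((t, mrna))
    return trace

trace = memory_ssa(50.0)
```

    Because the off reaction is suppressed until the minimum on-time has elapsed, transcription arrives in bursts, which is the qualitative effect the memory reactions produce in the paper's gene expression model.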

  8. Modeling Coevolution between Language and Memory Capacity during Language Origin

    Science.gov (United States)

    Gong, Tao; Shuai, Lan

    2015-01-01

    Memory is essential to many cognitive tasks including language. Apart from empirical studies of memory effects on language acquisition and use, there has been little evolutionary exploration of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selections on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution; and the coevolution stopped once memory capacities became sufficient for language communications. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally-constituted factors for natural selection of individual cognitive abilities, and suggested that the degree difference in language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language. PMID:26544876

  9. Concurrent Operations of O2-Tree on Shared Memory Multicore Architectures

    OpenAIRE

    Daniel Ohene-Kwofie; E. J. Otoo; Gideon Nimako

    2014-01-01

    Modern computer architectures provide high-performance computing capability by having multiple CPU cores. Such systems are also typically associated with very large main-memory capacities, thereby allowing them to be used for fast processing of in-memory database applications. However, most of the concurrency control mechanisms associated with the index structures of these memory-resident databases do not scale well under high transaction rates. This paper presents the O2-Tree, a fast main me...

  10. Computer science II essentials

    CERN Document Server

    Raus, Randall

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Computer Science II includes organization of a computer, memory and input/output, coding, data structures, and program development. Also included is an overview of the most commonly

  11. The contributions of handedness and working memory to episodic memory.

    Science.gov (United States)

    Sahu, Aparna; Christman, Stephen D; Propper, Ruth E

    2016-11-01

    Past studies have independently shown associations of working memory and degree of handedness with episodic memory retrieval. The current study goes a step further by examining whether handedness and working memory independently predict episodic memory. In agreement with past studies, there was an inconsistent-handed advantage for episodic memory; however, this advantage was absent for working memory tasks. Furthermore, regression analyses showed that handedness and complex working memory predicted episodic memory performance at different times. Results are discussed in light of theories of episodic memory and hemispheric interaction.

  12. Assessing Working Memory in Children: The Comprehensive Assessment Battery for Children - Working Memory (CABC-WM).

    Science.gov (United States)

    Cabbage, Kathryn; Brinkley, Shara; Gray, Shelley; Alt, Mary; Cowan, Nelson; Green, Samuel; Kuo, Trudy; Hogan, Tiffany P

    2017-06-12

    The Comprehensive Assessment Battery for Children - Working Memory (CABC-WM) is a computer-based battery designed to assess different components of working memory in young school-age children. Working memory deficits have been identified in children with language-based learning disabilities, including dyslexia [1, 2] and language impairment [3, 4], but it is not clear whether these children exhibit deficits in subcomponents of working memory, such as visuospatial or phonological working memory. The CABC-WM is administered on a desktop computer with a touchscreen interface and was specifically developed to be engaging and motivating for children. Although the long-term goal of the CABC-WM is to provide individualized working memory profiles in children, the present study focuses on the initial success and utility of the CABC-WM for measuring central executive, visuospatial, phonological loop, and binding constructs in children with typical development. Immediate next steps are to administer the CABC-WM to children with specific language impairment, dyslexia, and comorbid specific language impairment and dyslexia.

  13. Memory for speech and speech for memory.

    Science.gov (United States)

    Locke, J L; Kutz, K J

    1975-03-01

    Thirty kindergarteners, 15 who substituted /w/ for /r/ and 15 with correct articulation, received two perception tests and a memory test that included /w/ and /r/ in minimally contrastive syllables. Although both groups had nearly perfect perception of the experimenter's productions of /w/ and /r/, misarticulating subjects perceived their own tape-recorded w/r productions as /w/. In the memory task these same misarticulating subjects committed significantly more /w/-/r/ confusions in unspoken recall. The discussion considers why people subvocally rehearse; a developmental period in which children do not rehearse; ways subvocalization may aid recall, including motor and acoustic encoding; an echoic store that provides additional recall support if subjects rehearse vocally; and perception of self- and other-produced phonemes by misarticulating children, including its relevance to a motor theory of perception. Evidence is presented that speech for memory can be sufficiently impaired to cause memory disorder. Conceptions that restrict speech disorder to an impairment of communication are challenged.

  14. Psychophysiology of prospective memory.

    Science.gov (United States)

    Rothen, Nicolas; Meier, Beat

    2014-01-01

    Prospective memory involves the self-initiated retrieval of an intention upon an appropriate retrieval cue. Cue identification can be considered an orienting reaction and may thus trigger a psychophysiological response. Here we present two experiments in which skin conductance responses (SCRs) elicited by prospective memory cues were compared to SCRs elicited by aversive stimuli, to test whether a single prospective memory cue triggers a similar SCR to an aversive stimulus. In Experiment 2 we also assessed whether cue specificity had a differential influence on prospective memory performance and on SCRs. We found that detecting a single prospective memory cue is as likely to elicit an SCR as an aversive stimulus. Missed prospective memory cues also elicited SCRs. On a behavioural level, specific intentions led to better prospective memory performance. However, on a psychophysiological level specificity had no influence. More generally, the results indicate reliable SCRs for prospective memory cues and point to psychophysiological measures as a valuable approach that offers a new way to study one-off prospective memory tasks. Moreover, the findings are consistent with a theory that posits multiple prospective memory retrieval stages.

  15. Scientific developments of liquid crystal-based optical memory: a review

    Science.gov (United States)

    Prakash, Jai; Chandran, Achu; Biradar, Ashok M.

    2017-01-01

    The memory behavior in liquid crystals (LCs), although rarely observed, has made very significant headway over the past three decades since its discovery in nematic-type LCs. It has gone from a mere scientific curiosity to application in a variety of commodities. The memory elements formed by numerous LCs have been protected by patents, and some have been commercialized, used to complement non-volatile memory devices and as memory in personal computers and digital cameras. They also offer the low-cost, large-area, high-speed, and high-density memory needed for advanced computers and digital electronics. Short- and long-duration memory behavior for industrial applications has been obtained from several LC materials, and LC memories with interesting features and applications have been demonstrated using numerous LCs. However, considerable challenges still exist in the search for highly efficient, stable, and long-lifespan materials and methods, so that the development of useful memory devices becomes possible. This review focuses on the scientific and technological aspects of the fascinating applications of LC-based memory. We address the introduction, development status, novel design and engineering principles, and parameters of LC memory. We also address how the amalgamation of LCs could bring significant change and improvement in memory effects in the emerging field of nanotechnology, and the application of LC memory as the active component of futuristic memory devices.

  16. A Working Memory Test Battery: Java-Based Collection of Seven Working Memory Tasks

    Directory of Open Access Journals (Sweden)

    James M Stone

    2015-06-01

    Working memory is a key construct within cognitive science. It is an important theory in its own right, but its influence is enriched by widespread evidence that measures of its capacity are linked to a variety of functions in wider cognition. To facilitate active research into this topic, we describe seven computer-based tasks that provide estimates of short-term and working memory, incorporating both visuospatial and verbal material. The memory span tasks provided are: digit span, matrix span, arrow span, reading span, operation span, rotation span, and symmetry span. These tasks are built to be simple to use, flexible to adapt to the specific needs of a research design, and open source. All files can be downloaded from the project website http://www.cognitivetools.uk and the source code is available via GitHub.

  17. Next generation spin torque memories

    CERN Document Server

    Kaushik, Brajesh Kumar; Kulkarni, Anant Aravind; Prajapati, Sanjay

    2017-01-01

    This book offers detailed insights into spin transfer torque (STT) based devices, circuits and memories. Starting with the basic concepts and device physics, it then addresses advanced STT applications and discusses the outlook for this cutting-edge technology. It also describes the architectures, performance parameters, fabrication, and the prospects of STT based devices. Further, moving from the device to the system perspective it presents a non-volatile computing architecture composed of STT based magneto-resistive and all-spin logic devices and demonstrates that efficient STT based magneto-resistive and all-spin logic devices can turn the dream of instant on/off non-volatile computing into reality.

  18. Intentionally fabricated autobiographical memories.

    Science.gov (United States)

    Justice, Lucy V; Morrison, Catriona M; Conway, Martin A

    2018-02-01

    Participants generated both autobiographical memories (AMs) that they believed to be true and intentionally fabricated autobiographical memories (IFAMs). Memories were constructed while a concurrent memory load (random 8-digit sequence) was held in mind or while there was no concurrent load. Amount and accuracy of recall of the concurrent memory load was reliably poorer following generation of IFAMs than following generation of AMs. There was no reliable effect of load on memory generation times; however, IFAMs always took longer to construct than AMs. Finally, replicating previous findings, fewer IFAMs had a field perspective than AMs, IFAMs were less vivid than AMs, and IFAMs contained more motion words (indicative of increased cognitive load). Taken together, these findings show a pattern of systematic differences that mark out IFAMs, and they also show that IFAMs can be identified indirectly by lowered performance on concurrent tasks that increase cognitive load.

  19. Shape memory polymers

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Thomas S.; Bearinger, Jane P.

    2017-08-29

    New shape memory polymer compositions, methods for synthesizing new shape memory polymers, and apparatus comprising an actuator and a shape memory polymer wherein the shape memory polymer comprises at least a portion of the actuator. A shape memory polymer comprising a polymer composition which physically forms a network structure wherein the polymer composition has shape-memory behavior and can be formed into a permanent primary shape, re-formed into a stable secondary shape, and controllably actuated to recover the permanent primary shape. Polymers have optimal aliphatic network structures due to minimization of dangling chains by using monomers that are symmetrical and that have matching amine and hydroxyl groups providing polymers and polymer foams with clarity, tight (narrow temperature range) single transitions, and high shape recovery and recovery force that are especially useful for implanting in the human body.

  20. Time for memory

    DEFF Research Database (Denmark)

    Murakami, Kyoko

    2012-01-01

    This article is a continuing dialogue on memory triggered by Brockmeier’s (2010) article. I drift away from the conventionalization of the archive as a spatial metaphor for memory in order to consider the greater possibility of “time” for conceptualizing memory. The concept of time is central... in terms of autobiographical memory. The second category of time is discussed, drawing on Augustine and Bergson amongst others. Bergson’s notion of duration has been considered a promising concept for a better understanding of autobiographical memory. Psychological phenomena such as autobiographical... memory should embrace not only a spatial dimension but also a temporal dimension: a constant flow of irreversible time in which multiplicity, momentariness, dynamic stability, and the becoming and emergence of novelty can be observed...

  1. Shape memory polymers

    Science.gov (United States)

    Wilson, Thomas S.; Bearinger, Jane P.

    2015-06-09

    New shape memory polymer compositions, methods for synthesizing new shape memory polymers, and apparatus comprising an actuator and a shape memory polymer wherein the shape memory polymer comprises at least a portion of the actuator. A shape memory polymer comprising a polymer composition which physically forms a network structure wherein the polymer composition has shape-memory behavior and can be formed into a permanent primary shape, re-formed into a stable secondary shape, and controllably actuated to recover the permanent primary shape. Polymers have optimal aliphatic network structures due to minimization of dangling chains by using monomers that are symmetrical and that have matching amine and hydroxyl groups providing polymers and polymer foams with clarity, tight (narrow temperature range) single transitions, and high shape recovery and recovery force that are especially useful for implanting in the human body.

  2. Zone memories and pseudorandom addressing

    International Nuclear Information System (INIS)

    Marino, D.; Mirizzi, N.; Stella, R.; Visaggio, G.

    1975-01-01

    A quantitative comparison is presented between zone memories, pseudorandom-addressed memories, and an alternative special-purpose memory (the spread-zone memory) in which the distance between any two transformed descriptors that were initially adjacent is independent of the descriptor pair and is the maximum possible. This memory has received little attention to date despite its efficiency and its simple implementation.

  3. An introduction to digital computing

    CERN Document Server

    George, F H

    2014-01-01

    An Introduction to Digital Computing provides information pertinent to the fundamental aspects of digital computing. This book represents a major step towards the universal availability of programmed material.Organized into four chapters, this book begins with an overview of the fundamental workings of the computer, including the way it handles simple arithmetic problems. This text then provides a brief survey of the basic features of a typical computer that is divided into three sections, namely, the input and output system, the memory system for data storage, and a processing system. Other c

  4. Music, memory and emotion

    OpenAIRE

    Jäncke, Lutz

    2008-01-01

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory. Music has a prominent role in the everyday life of many people. Whether it is for recreation, distraction or mood enhancement, a lot of people listen to music from early in t...

  5. Making Memories Matter

    OpenAIRE

    Gold, Paul E.; Korol, Donna L.

    2012-01-01

    This article reviews some of the neuroendocrine bases by which emotional events regulate brain mechanisms of learning and memory. In laboratory rodents, there is extensive evidence that epinephrine influences memory processing through an inverted-U relationship, at which moderate levels enhance and high levels impair memory. These effects are, in large part, mediated by increases in blood glucose levels subsequent to epinephrine release, which then provide support for the brain processes en...

  6. Emotion and Autobiographical Memory

    Directory of Open Access Journals (Sweden)

    Nuray Sarp

    2011-09-01

    Self and mind are constituted through the cumulative effects of significant life events. This description is regarded as a given, explicitly or implicitly, in various theories of personality. Such an acknowledgment inevitably brings these theories together on two basic concepts. The first is the emotions that give meaning to experiences, and the second is the memory that stores these experiences. The part of memory responsible for the storage and retrieval of life events is autobiographical memory. Besides the development of personality, emotions and autobiographical memory are important in the development and maintenance of psychopathology. Therefore, these two concepts have both longitudinal and cross-sectional functions in understanding human beings. In the case of psychopathology, understanding emotions and autobiographical memory developmentally aids in understanding internal susceptibility factors. In addition, understanding how these two structures work and influence each other during an acute event would help to clarify the etiological mechanisms of mental disorders. In the literature, theories that include both of these structures and that have clinical implications are inconclusive. Theories of memory generally focus on cognitive and semantic structures while neglecting emotions, whereas theories of emotion generally neglect memory and its organization. Only a few theories cover both concepts. In the present article, theories that include both emotions and autobiographical memory in the same framework (i.e., Self Memory System, Associative Network Theory, Structural and Contextual theories, and Affect Regulation Theory) are discussed to see the full picture. Taken together, these theories seem to have the potential to suggest data-driven models for understanding and explaining symptoms such as flashbacks, dissociation, amnesia, and overgeneral memory seen in

  7. Islamic Myths and Memories

    DEFF Research Database (Denmark)

    Islamic myths and collective memory are very much alive in today’s localized struggles for identity, and are deployed in the ongoing construction of worldwide cultural networks. This book brings the theoretical perspectives of myth-making and collective memory to the study of Islam and globalization... It shows how contemporary Islamic thinkers and movements respond to the challenges of globalization by preserving, reviving, reshaping, or transforming myths and memories.

  8. Memory T Cell Migration

    OpenAIRE

    Qianqian eZhang; Qianqian eZhang; Fadi G. Lakkis

    2015-01-01

    Immunological memory is a key feature of adaptive immunity. It provides the organism with long-lived and robust protection against infection. In organ transplantation, memory T cells pose a significant threat by causing allograft rejection that is generally resistant to immunosuppressive therapy. Therefore, a more thorough understanding of memory T cell biology is needed to improve the survival of transplanted organs without compromising the host’s ability to fight infections. This review...

  9. Iconic memory requires attention

    OpenAIRE

    Persuh, Marjan; Genzer, Boris; Melara, Robert D.

    2012-01-01

    Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when co...

  10. NONLINEAR GRAVITATIONAL-WAVE MEMORY FROM BINARY BLACK HOLE MERGERS

    International Nuclear Information System (INIS)

    Favata, Marc

    2009-01-01

    Some astrophysical sources of gravitational waves can produce a 'memory effect', which causes a permanent displacement of the test masses in a freely falling gravitational-wave detector. The Christodoulou memory is a particularly interesting nonlinear form of memory that arises from the gravitational-wave stress-energy tensor's contribution to the distant gravitational-wave field. This nonlinear memory contributes a nonoscillatory component to the gravitational-wave signal at leading (Newtonian-quadrupole) order in the waveform amplitude. Previous computations of the memory and its detectability considered only the inspiral phase of binary black hole coalescence. Using an 'effective-one-body' (EOB) approach calibrated to numerical relativity simulations, as well as a simple fully analytic model, the Christodoulou memory is computed for the inspiral, merger, and ringdown. The memory will be very difficult to detect with ground-based interferometers, but is likely to be observable in supermassive black hole mergers with LISA out to redshifts z ≲ 2. Detection of the nonlinear memory could serve as an experimental test of the ability of gravity to 'gravitate'.

  11. Memories Persist in Silence

    Directory of Open Access Journals (Sweden)

    Sandra Patricia Arenas Grisales

    2012-08-01

    This article advances the hypothesis that memory artifacts, created to commemorate the victims of the armed conflict in Colombia, are an expression of underground memories and a form of political action in the midst of war. We analyze three cases of memory artifacts created in Medellín, Colombia, as forms of suffering, perceiving, and resisting the power of armed groups in Medellín. The silence inherent in these objects should not be treated as an absence of language, but as another form of expression of memory. Silence is a tactic used to overcome losses and to re-establish everyday life in contexts of protracted violence.

  12. The future of memory

    Science.gov (United States)

    Marinella, M.

    In the not too distant future, the traditional memory and storage hierarchy may be replaced by a single Storage Class Memory (SCM) device integrated on or near the logic processor. Traditional magnetic hard drives, NAND flash, DRAM, and higher-level caches (L2 and up) will be replaced with a single high-performance memory device. The Storage Class Memory paradigm will require high speed (read/write), excellent endurance (>10^12 cycles), nonvolatility (retention >10 years), and low switching energies; emerging candidates include devices such as phase change memory (PCM). All of these devices show potential well beyond that of current flash technologies, and research efforts are underway to improve their endurance, write speeds, and scalability to be on par with DRAM. This progress has interesting implications for space electronics: each of these emerging device technologies shows excellent resistance to the types of radiation typically found in space applications. Commercially developed, high-density storage-class-memory-based systems may include a memory that is physically radiation hard, and suitable for space applications without major shielding efforts. This paper reviews the Storage Class Memory concept, emerging memory devices, and their possible applicability to radiation-hardened electronics for space.

  13. Models of Working Memory

    National Research Council Canada - National Science Library

    Miyake, Akira

    1997-01-01

    .... Understanding the mechanisms and structures underlying working memory is, hence, one of the most important scientific issues that need to be addressed to improve the efficiency and performance...

  14. Subversion: The Neglected Aspect of Computer Security.

    Science.gov (United States)

    1980-06-01

    it into the memory of the computer. These are called flows on covert channels... A simple covert channel is the running time of a program. Because... program and, in doing so, gives it 'permission' to perform its covert functions. Not only will most computer systems not prevent the employment of such a... R. Schell, Major, USAF, June 1974. 11. Lackey, R. P., "Penetration of Computer Systems, an Overview", Honeywell Computer Journal, Vol. 8, No. 2, 1974.

  15. Single-item memory, associative memory, and the human hippocampus

    OpenAIRE

    Gold, Jeffrey J.; Hopkins, Ramona O.; Squire, Larry R.

    2006-01-01

    We tested recognition memory for items and associations in memory-impaired patients with bilateral lesions thought to be limited to the hippocampal region. In Experiment 1 (Combined memory test), participants studied words and then took a memory test in which studied words, new words, studied word pairs, and recombined word pairs were presented in a mixed order. In Experiment 2 (Separated memory test), participants studied single words and then took a memory test involving studied word and ne...

  16. Memory reconsolidation mediates the updating of hippocampal memory content

    OpenAIRE

    Jonathan L C Lee

    2010-01-01

    The retrieval or reactivation of a memory places it into a labile state, requiring a process of reconsolidation to restabilize it. This retrieval-induced plasticity is a potential mechanism for the modification of the existing memory. Following previous data supportive of a functional role for memory reconsolidation in the modification of memory strength, here I show that hippocampal memory reconsolidation also supports the updating of contextual memory content. Using a procedure that se...

  17. A real-time multichannel memory controller and optimal mapping of memory clients to memory channels

    NARCIS (Netherlands)

    Gomony, M.D.; Akesson, K.B.; Goossens, K.G.W.

    2015-01-01

    Ever-increasing demands for main memory bandwidth and memory speed/power tradeoff led to the introduction of memories with multiple memory channels, such as Wide IO DRAM. Efficient utilization of a multichannel memory as a shared resource in multiprocessor real-time systems depends on mapping of the

  18. Memory bottlenecks and memory contention in multi-core Monte Carlo transport codes

    International Nuclear Information System (INIS)

    Tramm, J.R.; Siegel, A.R.

    2013-01-01

    The simulation of whole nuclear cores through the use of Monte Carlo codes requires an impracticably long time-to-solution. We have extracted a kernel that executes only the most computationally expensive steps of the Monte Carlo particle transport algorithm - the calculation of macroscopic cross sections - in an effort to expose bottlenecks within multi-core, shared memory architectures. (authors)
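The macroscopic cross-section step the abstract singles out can be sketched as follows. This is a hedged illustration, not the authors' kernel: the helper names, the toy material, and the two-point cross-section table are invented for the example. The repeated binary searches and scattered table reads in this loop are exactly what stress the memory subsystem on multi-core machines.

```python
import bisect

# Macroscopic cross section: Sigma(E) = sum_i N_i * sigma_i(E), where N_i is
# the number density of nuclide i and sigma_i its microscopic cross section,
# linearly interpolated on a per-nuclide energy grid.

def micro_xs(energy_grid, xs_values, E):
    """Interpolate a microscopic cross section at energy E."""
    j = bisect.bisect_right(energy_grid, E) - 1
    j = max(0, min(j, len(energy_grid) - 2))
    E0, E1 = energy_grid[j], energy_grid[j + 1]
    f = (E - E0) / (E1 - E0)
    return xs_values[j] + f * (xs_values[j + 1] - xs_values[j])

def macro_xs(material, E):
    """Sum N_i * sigma_i(E) over the nuclides in a material."""
    return sum(N * micro_xs(grid, xs, E) for N, grid, xs in material)

# Toy material: one nuclide with number density 0.02 and a two-point table.
material = [(0.02, [1.0, 10.0], [4.0, 2.0])]
sigma = macro_xs(material, 5.5)   # 0.02 * (4.0 + 0.5*(2.0 - 4.0)) = 0.06
```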

  19. Memory-efficient analysis of dense functional connectomes

    Directory of Open Access Journals (Sweden)

    Kristian Loewe

    2016-11-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality, because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, the memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency in terms of memory requirements and computation time. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling
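The on-demand idea can be sketched in a few lines. This is an illustrative Python sketch, not the Matlab implementation the abstract describes; the class and method names are assumptions. Only the z-scored time series are kept in memory, and each correlation entry is computed when it is requested.

```python
import numpy as np

class OnDemandCorrMatrix:
    """Behaves like an N x N correlation matrix while storing only the
    underlying (n_nodes, n_timepoints) time series data."""
    def __init__(self, timeseries):
        ts = np.asarray(timeseries, dtype=float)
        ts = ts - ts.mean(axis=1, keepdims=True)           # demean rows
        self.z = ts / np.linalg.norm(ts, axis=1, keepdims=True)

    def __getitem__(self, idx):
        # Pearson correlation of nodes i and j, computed on the fly.
        i, j = idx
        return float(self.z[i] @ self.z[j])

    def row(self, i):
        # One full row costs O(N*T) time but only O(N) extra memory.
        return self.z @ self.z[i]

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 50))
C = OnDemandCorrMatrix(data)
full = np.corrcoef(data)   # the explicit matrix, for comparison only
assert np.allclose(C[3, 7], full[3, 7])
```

For a connectome with N nodes this trades the O(N^2) storage of the explicit matrix for O(N*T) storage of the time series plus recomputation on access, which is the trade-off the abstract describes.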

  20. Acoustic Masking in Primary Memory

    Science.gov (United States)

    Colle, Herbert A.; Welsh, Alan

    1976-01-01

    Two experiments are reported to investigate the theory that since auditory sensory memory is used to store memory information, concurrent auditory stimulation should destroy memory information and thus reduce recall performance. (Author/RM)

  1. Introduction to magnetic random-access memory

    CERN Document Server

    Dieny, Bernard; Lee, Kyung-Jin

    2017-01-01

    Magnetic random-access memory (MRAM) is poised to replace traditional computer memory based on complementary metal-oxide semiconductors (CMOS). MRAM will surpass all other types of memory devices in terms of nonvolatility, low energy dissipation, fast switching speed, radiation hardness, and durability. Although toggle-MRAM is currently a commercial product, it is clear that future developments in MRAM will be based on spin-transfer torque, which makes use of electrons’ spin angular momentum instead of their charge. MRAM will require an amalgamation of magnetics and microelectronics technologies. However, researchers and developers in magnetics and in microelectronics attend different technical conferences, publish in different journals, use different tools, and have different backgrounds in condensed-matter physics, electrical engineering, and materials science. This book is an introduction to MRAM for microelectronics engineers written by specialists in magnetic materials and devices. It presents the bas...

  2. Low-memory iterative density fitting.

    Science.gov (United States)

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation is presented, based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix, which decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with only linear-scaling memory requirements. Compared with the standard density fitting implementation, up to a 15-fold reduction in memory requirements is achieved for the most efficient preconditioner, at a cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
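The block-preconditioned conjugate gradient scheme can be sketched as below. This is a hedged sketch under stated assumptions, not the paper's implementation: a dense matrix-vector product stands in for the CFMM evaluation, the toy symmetric positive-definite matrix replaces the Coulomb metric, and the block size is an arbitrary illustrative choice.

```python
import numpy as np

def block_jacobi_pcg(J, b, block, tol=1e-10, maxiter=200):
    """Solve J x = b by CG, preconditioned with inverses of diagonal blocks
    of J (block-Jacobi), assuming len(b) is divisible by `block`."""
    n = len(b)
    inv_blocks = [np.linalg.inv(J[s:s+block, s:s+block])
                  for s in range(0, n, block)]

    def apply_M(r):                      # apply the block preconditioner
        z = np.empty_like(r)
        for k, s in enumerate(range(0, n, block)):
            z[s:s+block] = inv_blocks[k] @ r[s:s+block]
        return z

    x = np.zeros(n)
    r = b - J @ x
    z = apply_M(r)
    p = z.copy()
    for it in range(maxiter):
        Jp = J @ p                       # CFMM would supply this product
        alpha = (r @ z) / (p @ Jp)
        x += alpha * p
        r_new = r - alpha * Jp
        if np.linalg.norm(r_new) < tol:
            return x, it + 1
        z_new = apply_M(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxiter

# Toy SPD "metric": diagonally dominant symmetric matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 60))
J = A @ A.T + 60 * np.eye(60)
b = rng.standard_normal(60)
x, iters = block_jacobi_pcg(J, b, block=10)
```

Only the block inverses and a few vectors are stored; the full matrix never needs to be kept if its action can be applied on the fly, which is the role CFMM plays in the paper.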

  3. Quantum memory for images: A quantum hologram

    International Nuclear Information System (INIS)

    Vasilyev, Denis V.; Sokolov, Ivan V.; Polzik, Eugene S.

    2008-01-01

    Matter-light quantum interfaces and quantum memory for light are important ingredients of quantum information protocols, such as quantum networks and distributed quantum computation [P. Zoller et al., Eur. Phys. J. D 36, 203 (2005)]. In this paper we present a spatially multimode scheme for quantum memory for light, which we call a quantum hologram. Our approach uses a multiatom ensemble which has been shown to be efficient for single-spatial-mode quantum memory. Due to the multiatom nature of the ensemble and to optical parallelism, it is capable of storing many spatial modes, a feature critical for the present proposal. A quantum hologram with fidelity exceeding that of a classical hologram will be able to store quantum features of an image, such as multimode superpositions and entangled quantum states, something that a standard hologram is unable to achieve.

  4. Multiprocessor shared-memory information exchange

    International Nuclear Information System (INIS)

    Santoline, L.L.; Bowers, M.D.; Crew, A.W.; Roslund, C.J.; Ghrist, W.D. III

    1989-01-01

    In distributed microprocessor-based instrumentation and control systems, the inter- and intra-subsystem communication requirements ultimately form the basis for the overall system architecture. This paper describes a software protocol which addresses the intra-subsystem communications problem. Specifically, the protocol allows multiple processors to exchange information via a shared-memory interface. The authors' primary goal is to provide a reliable means for information to be exchanged between central application processor boards (masters) and dedicated function processor boards (slaves) in a single computer chassis. The resultant Multiprocessor Shared-Memory Information Exchange (MSMIE) protocol, a standard master-slave shared-memory interface suitable for use in nuclear safety systems, is designed to pass unidirectional buffers of information between the processors while providing a minimum, deterministic cycle time for this data exchange.
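The unidirectional, non-blocking exchange described can be illustrated with a minimal single-writer/single-reader sketch. This is only an illustration of the idea, not the MSMIE protocol itself: Python threading stands in for shared memory between processor boards, and the class and method names are invented for the example.

```python
import threading

class SharedExchange:
    """Slave (writer) publishes completed buffers; master (reader) always
    obtains the most recent complete buffer and never blocks waiting for
    data, giving a deterministic read cycle."""
    def __init__(self):
        self._newest = None
        self._lock = threading.Lock()

    def slave_publish(self, buf):
        """Swap in a completed buffer (called by the slave processor)."""
        with self._lock:
            self._newest = buf

    def master_read(self):
        """Take the newest buffer, or None if nothing new (never blocks)."""
        with self._lock:
            buf, self._newest = self._newest, None
            return buf

ex = SharedExchange()
ex.slave_publish(b"sample-1")
ex.slave_publish(b"sample-2")            # master missed sample-1; that's
assert ex.master_read() == b"sample-2"   # fine: it always sees the newest
assert ex.master_read() is None
```

The key property, as in the abstract, is that the reader's cycle time is bounded: it either gets the latest complete buffer or an immediate "no new data" indication, never a partially written one.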

  5. Quantum capacity of dephasing channels with memory

    International Nuclear Information System (INIS)

    D'Arrigo, A; Benenti, G; Falci, G

    2007-01-01

    We show that the amount of coherent quantum information that can be reliably transmitted down a dephasing channel with memory is maximized by separable input states. In particular, we model the channel as a Markov chain or as a multimode environment of oscillators. While in the first model the maximization is achieved for the maximally mixed input state, in the latter it is convenient to exploit the presence of a decoherence-protected subspace generated by memory effects. We explicitly compute the quantum channel capacity for the first model, while numerical simulations suggest a lower bound for the latter. In both cases memory effects enhance the coherent information. We present results valid for arbitrary input size.
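    As a point of reference for the memory-enhanced capacities discussed above, the memoryless dephasing (phase-flip) channel has the closed-form quantum capacity Q = 1 - h2(p), with h2 the binary entropy. A small sketch of this baseline (the function names are ours, not from the paper):

```python
from math import log2

def h2(p):
    """Binary (Shannon) entropy of p, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def dephasing_capacity(p):
    """Quantum capacity of a memoryless phase-flip channel with flip
    probability p: Q = 1 - h2(p).  Memory effects, the subject of the
    paper, can only enhance this baseline coherent information."""
    return 1.0 - h2(p)

# p = 0: a noiseless channel carries one qubit per use;
# p = 0.5: the phase is fully randomized and the capacity vanishes
caps = [dephasing_capacity(p) for p in (0.0, 0.1, 0.5)]
```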

  6. `Unlearning' has a stabilizing effect in collective memories

    Science.gov (United States)

    Hopfield, J. J.; Feinstein, D. I.; Palmer, R. G.

    1983-07-01

    Crick and Mitchison [1] have presented a hypothesis for the functional role of dream sleep involving an `unlearning' process. We have independently carried out mathematical and computer modelling of learning and `unlearning' in a collective neural network of 30-1,000 neurones. The model network has a content-addressable memory or `associative memory' which allows it to learn and store many memories. A particular memory can be evoked in its entirety when the network is stimulated by any adequate-sized subpart of the information of that memory [2]. But different memories of the same size are not equally easy to recall. Also, when memories are learned, spurious memories are also created and can also be evoked. Applying an `unlearning' process, similar to the learning processes but with a reversed sign and starting from a noise input, enhances the performance of the network in accessing real memories and in minimizing spurious ones. Although our model was not motivated by higher nervous function, our system displays behaviours which are strikingly parallel to those needed for the hypothesized role of `unlearning' in rapid eye movement (REM) sleep.
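    The learn/unlearn cycle described above can be sketched numerically. A minimal Hopfield-style model, assuming Hebbian outer-product learning and an unlearning increment of the same form with reversed sign; the network size, pattern count, and unlearning rate below are illustrative, not the values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                         # neurons
patterns = rng.choice([-1, 1], size=(5, N))    # stored +/-1 memories

# Hebbian learning: sum of outer products, zero self-connections
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(W, x, steps=50):
    """Iterate x <- sign(Wx) until the state stops changing (an attractor)."""
    for _ in range(steps):
        nxt = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(nxt, x):
            break
        x = nxt
    return x

def unlearn(W, epsilon=0.01, trials=200):
    """Start from noise, settle into an attractor (real or spurious), then
    apply a reversed-sign Hebbian increment for it.  Spurious attractors,
    which noise inputs fall into disproportionately, are weakened most."""
    for _ in range(trials):
        x = recall(W, rng.choice([-1, 1], size=N))
        W = W - epsilon * np.outer(x, x)
        np.fill_diagonal(W, 0)
    return W

W = unlearn(W)

# a stored memory should still be evoked from a corrupted subpart of it
cue = patterns[0].copy()
cue[:6] *= -1                                  # flip 6 of the 64 bits
recalled_ok = np.array_equal(recall(W, cue), patterns[0])
```

    Because sign(cWx) = sign(Wx) for any positive scale c, the mild uniform shrinkage that unlearning applies to the real memories leaves their recall intact while the spurious attractors lose their basins.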

  7. FFT transformed quantitative EEG analysis of short term memory load.

    Science.gov (United States)

    Singh, Yogesh; Singh, Jayvardhan; Sharma, Ratna; Talwar, Anjana

    2015-07-01

    The EEG is considered a building block of functional signaling in the brain, and the role of EEG oscillations in human information processing has been intensively investigated. The aim was to study the quantitative EEG correlates of short term memory load as assessed through the Sternberg memory test. The study was conducted on 34 healthy male student volunteers. The intervention consisted of the Sternberg memory test, run as a software version of the Sternberg memory scanning paradigm on a computer. Electroencephalography (EEG) was recorded from 19 scalp locations according to the 10-20 international system of electrode placement, and the EEG signals were analyzed offline. To overcome the problems of a fixed band system, an individual alpha frequency (IAF) based band selection method was adopted. The outcome measures were FFT-transformed absolute powers in the six bands at the 19 electrode positions. The Sternberg memory test served as a model of short term memory load. During the memory task, absolute power decreased in the upper alpha band at nearly all electrode positions, while power increased in the theta band over the fronto-temporal region and in the lower 1 alpha band over the fronto-central region; lower 2 alpha, beta, and gamma band power remained unchanged. Short term memory load thus has distinct electroencephalographic correlates resembling a mentally stressed state. This is evident from the decreased power in the upper alpha band (corresponding to the alpha band of the traditional EEG system), the representative band of a relaxed mental state; the fronto-temporal theta power changes may reflect the encoding and execution of the memory task.
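    The band-power computation can be sketched as follows, assuming IAF-anchored band edges in the style of Klimesch's scheme; the exact edges used in the study may differ, and the signal here is synthetic rather than recorded EEG.

```python
import numpy as np

def band_powers(signal, fs, iaf):
    """Absolute power per band from an FFT power spectrum.  Band edges are
    anchored to the individual alpha frequency (IAF) rather than fixed
    cut-offs; the edges below are illustrative."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    bands = {
        "theta":  (iaf - 6, iaf - 4),
        "lower1": (iaf - 4, iaf - 2),
        "lower2": (iaf - 2, iaf),
        "upper":  (iaf, iaf + 2),
        "beta":   (iaf + 2, 30.0),
        "gamma":  (30.0, 45.0),
    }
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# a toy signal dominated by 10.2 Hz: its power should land in 'upper'
fs, iaf = 256, 10.0
t = np.arange(fs * 2) / fs                  # 2 s at 256 Hz
sig = np.sin(2 * np.pi * 10.2 * t)
powers = band_powers(sig, fs, iaf)
```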

  8. Time Series with Long Memory

    OpenAIRE

    西埜, 晴久

    2004-01-01

    The paper investigates the application of long-memory processes to economic time series. We show properties of long-memory processes that motivate their use in modelling long-memory phenomena in economic time series. An FARIMA model is described as an example of a long-memory model in statistical terms. The paper explains basic limit theorems and estimation methods for long-memory processes in order to apply long-memory models to economic time series.
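    The simplest long-memory special case, FARIMA(0, d, 0), can be simulated from its MA(infinity) representation; a sketch, with the series length and d chosen purely for illustration:

```python
import numpy as np

def farima_0d0(n, d, rng):
    """Simulate FARIMA(0, d, 0): x_t = (1 - B)^(-d) eps_t, via the MA(inf)
    weights psi_k = Gamma(k + d) / (Gamma(d) Gamma(k + 1)), truncated at n
    lags.  The recursion psi_k = psi_{k-1} * (k - 1 + d) / k avoids
    overflowing the Gamma function."""
    k = np.arange(1, n)
    psi = np.cumprod(np.concatenate(([1.0], (k - 1 + d) / k)))
    eps = rng.standard_normal(2 * n)
    # 'valid' mode keeps only outputs that use the full psi window (burn-in)
    return np.convolve(eps, psi, mode="valid")[:n]

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(1)
long_mem = farima_0d0(5000, 0.4, rng)    # d = 0.4: stationary long memory
white = rng.standard_normal(5000)        # d = 0: no memory

rho_long = acf(long_mem, 1)              # theory: d / (1 - d) ~ 0.67 at lag 1
rho_white = acf(white, 1)                # theory: 0
```

    The hallmark of long memory is the hyperbolic (rather than exponential) decay of the autocorrelations, which the simulated series exhibits at long lags.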

  9. Memory systems interaction in the pigeon: working and reference memory.

    Science.gov (United States)

    Roberts, William A; Strang, Caroline; Macpherson, Krista

    2015-04-01

    Pigeons' performance on a working memory task, symbolic delayed matching-to-sample, was used to examine the interaction between working memory and reference memory. Reference memory was established by training pigeons to discriminate between the comparison cues used in delayed matching as S+ and S- stimuli. Delayed matching retention tests then measured accuracy when working and reference memory were congruent and incongruent. In 4 experiments, it was shown that the interaction between working and reference memory is reciprocal: Strengthening either type of memory leads to a decrease in the influence of the other type of memory. A process dissociation procedure analysis of the data from Experiment 4 showed independence of working and reference memory, and a model of working memory and reference memory interaction was shown to predict the findings reported in the 4 experiments. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  10. Encoding, Consolidation, and Retrieval of Contextual Memory: Differential Involvement of Dorsal CA3 and CA1 Hippocampal Subregions

    Science.gov (United States)

    Daumas, Stephanie; Halley, Helene; Frances, Bernard; Lassalle, Jean-Michel

    2005-01-01

    Studies on humans and animals have shed light on the unique contributions of the hippocampus to relational memory. However, the particular role of each hippocampal subregion in memory processing is still not clear. Hippocampal computational models and theories have emphasized a unique function in memory for each hippocampal subregion, with the CA3 area acting…

  11. Human Learning and Memory

    Science.gov (United States)

    Lieberman, David A.

    2012-01-01

    This innovative textbook is the first to integrate learning and memory, behaviour, and cognition. It focuses on fascinating human research in both memory and learning (while also bringing in important animal studies) and brings the reader up to date with the latest developments in the subject. Students are encouraged to think critically: key…

  12. Memory and technology

    Directory of Open Access Journals (Sweden)

    Olimpia Niglio

    2017-12-01

    The concept of "memory" has different meanings when analyzed within specific cultural contexts. In general, memory expresses the human ability to keep track of events, information, sensations, ideas, and experiences, and to recall them as soon as certain motivations make the contribution of past experience necessary.

  13. Shape memory alloys

    International Nuclear Information System (INIS)

    Kaszuwara, W.

    2004-01-01

    Shape memory alloys (SMA), when deformed, have the ability to return, under certain circumstances, to their initial shape. The deformations associated with this phenomenon are 1-8% for polycrystals and up to 15% for monocrystals, with a deformation energy in the range of 10^6-10^7 J/m^3. The deformation is caused by a martensitic transformation in the material. Shape memory alloys exhibit a one-directional or two-directional shape memory effect as well as a pseudoelastic effect. The shape change is activated by a temperature change, which limits the working frequency of SMA to about 10^2 Hz. Another group of alloys exhibits a magnetic shape memory effect; in these alloys the martensitic transformation is triggered by a magnetic field, so their working frequency can be higher. Composites containing shape memory alloys can also be used as shape memory materials (applied in vibration damping devices). A further group of composite materials, called heterostructures, incorporates SMA in the form of thin layers; these heterostructures can be used as microactuators in microelectromechanical systems (MEMS). The basic SMA comprise Ni-Ti, Cu (Cu-Zn, Cu-Al, Cu-Sn) and Fe (Fe-Mn, Fe-Cr-Ni) alloys. Shape memory alloys find applications in areas such as automation, safety and medical devices, and many domestic appliances. Currently the most important research appears to concern magnetic shape memory materials and high temperature SMA. Vital from the application point of view are composite materials, especially those combining several intelligent materials. (author)

  14. Human memory search

    NARCIS (Netherlands)

    Davelaar, E.J.; Raaijmakers, J.G.W.; Hills, T.T.; Robbins, T.W.; Todd, P.M.

    2012-01-01

    The importance of understanding human memory search is hard to exaggerate: we build and live our lives based on what we remember. This chapter explores the characteristics of memory search, with special emphasis on the use of retrieval cues. We introduce the dependent measures that are obtained

  15. Static memory devices

    NARCIS (Netherlands)

    2012-01-01

    A semiconductor memory device includes n-wells (22) and p-wells (24) used to make up a plurality of memory cell elements (40). The n-wells (22) and p-wells (24) can be back-biased to improve reading and writing performance. One of the n-wells and p-wells can be globally biased while the other one

  16. Human Memory: The Basics

    Science.gov (United States)

    Martinez, Michael E.

    2010-01-01

    The human mind has two types of memory: short-term and long-term. In all types of learning, it is best to use that structure rather than to fight against it. One way to do that is to ensure that learners can fit new information into patterns that can be stored in and more easily retrieved from long-term memory.

  17. Working Memory and Attitudes

    Science.gov (United States)

    Jung, Eun Sook; Reid, Norman

    2009-01-01

    Working memory capacity has been shown to be an important factor in controlling understanding in the sciences. Attitudes related to studies in the sciences are also known to be important in relation to success in learning. It might be argued that if working memory capacity is a rate controlling feature of learning and success in understanding…

  18. Reading and Working Memory

    Science.gov (United States)

    Baddeley, Alan

    1984-01-01

    Outlines the concept of working memory, with particular reference to a hypothetical subcomponent, the articulatory loop. Discusses the role of the loop in fluent adult reading, then examines the reading performance of adults with deficits in auditory verbal memory, showing that a capacity to articulate is not necessary for the effective…

  19. The memory of volatility

    Directory of Open Access Journals (Sweden)

    Kai R. Wenger

    2018-03-01

    The focus of the volatility literature on forecasting, and the predominance of the conceptually simpler HAR model over long memory stochastic volatility models, has led to the fact that the actual degree of memory estimates has rarely been considered. Estimates in the literature range roughly between 0.4 and 0.6 - that is, from the higher stationary to the lower non-stationary region. This difference, however, has important practical implications - such as the existence or non-existence of the fourth moment of the return distribution. Inference on the memory order is complicated by the presence of measurement error in realized volatility and the potential of spurious long memory. In this paper we provide a comprehensive analysis of the memory in variances of international stock indices and exchange rates. On the one hand, we find that the variance of exchange rates is subject to spurious long memory and the true memory parameter is in the higher stationary range. Stock index variances, on the other hand, are free of low frequency contaminations and the memory is in the lower non-stationary range. These results are obtained using state of the art local Whittle methods that allow consistent estimation in the presence of perturbations or low frequency contaminations.
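    The local Whittle approach referenced above can be sketched as follows: the memory parameter d is chosen to minimize Robinson's profiled objective over the first m Fourier frequencies (here by grid search). This is the plain estimator, not the perturbation-robust variants the paper relies on.

```python
import numpy as np

def local_whittle_d(x, m):
    """Plain local Whittle estimate of the memory parameter d, based on the
    periodogram at the first m Fourier frequencies and minimized over a grid.
    Only the lowest frequencies enter, so no full parametric model of the
    short-run dynamics is needed."""
    n = len(x)
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n                             # Fourier frequencies
    # periodogram ordinates I(lam_j)
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)

    def R(d):
        # Robinson's concentrated objective
        return np.log(np.mean(lam ** (2.0 * d) * I)) - 2.0 * d * np.mean(np.log(lam))

    grid = np.linspace(-0.49, 1.49, 397)
    return grid[np.argmin([R(d) for d in grid])]

rng = np.random.default_rng(2)
noise = rng.standard_normal(4096)
d_noise = local_whittle_d(noise, 64)            # white noise: d near 0
d_rw = local_whittle_d(np.cumsum(noise), 64)    # random walk: d near 1
```

    The choice of the bandwidth m trades bias (contamination from short-run dynamics) against variance, which is asymptotically 1/(4m) for the plain estimator.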

  20. Predicting Reasoning from Memory

    Science.gov (United States)

    Heit, Evan; Hayes, Brett K.

    2011-01-01

    In an effort to assess the relations between reasoning and memory, in 8 experiments, the authors examined how well responses on an inductive reasoning task are predicted from responses on a recognition memory task for the same picture stimuli. Across several experimental manipulations, such as varying study time, presentation frequency, and the…