WorldWideScience

Sample records for distributed memory machine

  1. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Schmidt, J.W.; Apers, Peter M.G.; Ceri, S.; Kersten, Martin L.; Oerlemans, Hans C.M.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a

  2. A general purpose subroutine for fast fourier transform on a distributed memory parallel machine

    Science.gov (United States)

    Dubey, A.; Zubair, M.; Grosch, C. E.

    1992-01-01

    A central issue in developing a general purpose Fast Fourier Transform (FFT) subroutine on a distributed memory parallel machine is the data distribution. Different users may wish to use the FFT routine with different data distributions; thus, there is a need to design FFT schemes on distributed memory parallel machines which can support a variety of data distributions. An FFT implementation on a distributed memory parallel machine which works for a number of data distributions commonly encountered in scientific applications is presented. The problem of rearranging the data after computing the FFT is also addressed. The performance of the implementation is evaluated on the Intel iPSC/860, a distributed memory parallel machine.
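    The data-distribution question the abstract raises can be made concrete with a small sketch (our own illustration, not the paper's code): index-to-owner maps for BLOCK and CYCLIC layouts, plus a redistribution step that rearranges data from one layout to the other (on a real machine this would be an all-to-all exchange).

```python
# Sketch of BLOCK vs CYCLIC data distributions over p logical processes,
# and redistribution between them. Function names are illustrative.

def block_owner(i, n, p):
    """Process owning global index i when n elements are block-distributed over p."""
    b = (n + p - 1) // p          # block size; the last block may be short
    return i // b

def cyclic_owner(i, n, p):
    """Process owning global index i under a cyclic (round-robin) distribution."""
    return i % p

def distribute(x, p, owner):
    """Split a global array into p per-process local lists."""
    n = len(x)
    local = [[] for _ in range(p)]
    for i, v in enumerate(x):
        local[owner(i, n, p)].append(v)
    return local

def redistribute(local, n, p, src_owner, dst_owner):
    """Rearrange data from one distribution to another (an all-to-all, in MPI terms)."""
    # Reconstruct (global index -> value) from the source layout ...
    glob = {}
    cursors = [0] * p
    for i in range(n):
        q = src_owner(i, n, p)
        glob[i] = local[q][cursors[q]]
        cursors[q] += 1
    # ... then scatter according to the destination layout.
    return distribute([glob[i] for i in range(n)], p, dst_owner)
```

    For example, eight elements over two processes are held as `[[0,1,2,3],[4,5,6,7]]` under BLOCK and as `[[0,2,4,6],[1,3,5,7]]` under CYCLIC; `redistribute` converts between the two.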

  3. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej; Paszyński, Maciej R.; Pardo, D.; Dalcin, Lisandro; Calo, Victor M.

    2015-01-01

    This paper derives theoretical estimates of the computational cost for isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for the C^{p-1} global continuity of the isogeometric solution

  4. Languages, compilers and run-time environments for distributed memory machines

    CERN Document Server

    Saltz, J

    1992-01-01

    Papers presented within this volume cover a wide range of topics related to programming distributed memory machines. Distributed memory architectures, although having the potential to supply the very high levels of performance required to support future computing needs, present awkward programming problems. The major issue is to design methods which enable compilers to generate efficient distributed memory programs from relatively machine independent program specifications. This book is the compilation of papers describing a wide range of research efforts aimed at easing the task of programming.

  5. PGHPF – An Optimizing High Performance Fortran Compiler for Distributed Memory Machines

    Directory of Open Access Journals (Sweden)

    Zeki Bozkus

    1997-01-01

    High Performance Fortran (HPF) is the first widely supported, efficient, and portable parallel programming language for shared and distributed memory systems. HPF is realized through a set of directive-based extensions to Fortran 90. It enables application developers and Fortran end-users to write compact, portable, and efficient software that will compile and execute on workstations, shared memory servers, clusters, traditional supercomputers, or massively parallel processors. This article describes a production-quality HPF compiler for a set of parallel machines. Compilation techniques such as data and computation distribution, communication generation, run-time support, and optimization issues are elaborated as the basis for an HPF compiler implementation on distributed memory machines. The performance of this compiler on benchmark programs demonstrates that high efficiency can be achieved executing HPF code on parallel architectures.
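    The "computation distribution" technique mentioned here is usually the owner-computes rule: for an assignment `a(i) = ...`, each process executes only the iterations whose left-hand-side element it owns. A hedged sketch (our own function names, not PGHPF internals), assuming a BLOCK distribution:

```python
# Owner-computes loop partitioning for a BLOCK-distributed array.
# Illustrative sketch only, not the compiler's actual code.

def block_bounds(n, p, rank):
    """Global [lo, hi) index range owned by `rank` for n elements over p processes."""
    b = (n + p - 1) // p
    lo = min(rank * b, n)
    hi = min(lo + b, n)
    return lo, hi

def local_iterations(n, p, rank, lo_g=0, hi_g=None):
    """Iterations of the global loop `for i in [lo_g, hi_g)` executed by `rank`."""
    if hi_g is None:
        hi_g = n
    lo, hi = block_bounds(n, p, rank)
    return range(max(lo, lo_g), min(hi, hi_g))
```

    The union of the per-rank iteration sets covers the global loop exactly once, which is what lets the compiler drop the guard tests and generate tight local loops.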

  6. A database for on-line event analysis on a distributed memory machine

    CERN Document Server

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

    Parallel in-memory databases can enhance the structuring and parallelization of programs used in High Energy Physics (HEP). Efficient database access routines are used as communication primitives which hide the communication topology, in contrast to more explicit communication libraries such as PVM or MPI. A parallel in-memory database, called SPIDER, has been implemented on a 32 node Meiko CS-2 distributed memory machine. The SPIDER primitives generate a lower overhead than that generated by PVM or MPI. The event reconstruction program, CPREAD, of the CPLEAR experiment has been used as a test case. Performance measurements showed that CPREAD interfaced to SPIDER can easily cope with the event rate generated by CPLEAR.

  7. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2011-07-27

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  8. A data base for on-line event analysis on a distributed memory machine

    International Nuclear Information System (INIS)

    Argante, E.; Meesters, M.R.J.; Willers, I.; Stok, P. van der

    1996-01-01

    Parallel in-memory databases can enhance the structuring and parallelization of programs used in High Energy Physics (HEP). Efficient database access routines are used as communication primitives which hide the communication topology, in contrast to more explicit communication libraries such as PVM or MPI. A parallel in-memory database, called SPIDER, has been implemented on a 32 node Meiko CS-2 distributed memory machine. The SPIDER primitives generate a lower overhead than that generated by PVM or MPI. The event reconstruction program, CPREAD, of the CPLEAR experiment has been used as a test case. Performance measurements showed that CPREAD interfaced to SPIDER can easily cope with the event rate generated by CPLEAR. (author)

  9. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej

    2015-02-01

    This paper derives theoretical estimates of the computational cost for isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for the C^{p-1} global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order O(log(N)p^2) for the one dimensional (1D) case, O(Np^2) for the two dimensional (2D) case, and O(N^{4/3}p^2) for the three dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX and SuperLU, available through the PetIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates both in terms of p and N. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, becoming about 20% for 256 processors for a 3D example with 128^3 unknowns and linear B-splines with C^0 global continuity, and 15% for a 3D example with 64^3 unknowns and quartic B-splines with C^3 global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher order continuity spaces is large, quickly consuming all the available memory resources even in the parallel distributed memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving higher order continuity spaces, although the number of processors that one can efficiently employ is somewhat limited.

  10. Investigating Solution Convergence in a Global Ocean Model Using a 2048-Processor Cluster of Distributed Shared Memory Machines

    Directory of Open Access Journals (Sweden)

    Chris Hill

    2007-01-01

    Up to 1920 processors of a cluster of distributed shared memory machines at the NASA Ames Research Center are being used to simulate ocean circulation globally at horizontal resolutions of 1/4, 1/8, and 1/16-degree with the Massachusetts Institute of Technology General Circulation Model, a finite volume code that can scale to large numbers of processors. The study aims to understand physical processes responsible for skill improvements as resolution is increased and to gain insight into what resolution is sufficient for particular purposes. This paper focuses on the computational aspects of reaching the technical objective of efficiently performing these global eddy-resolving ocean simulations. At 1/16-degree resolution the model grid contains 1.2 billion cells. At this resolution it is possible to simulate approximately one month of ocean dynamics in about 17 hours of wallclock time with a model timestep of two minutes on a cluster of four 512-way NUMA Altix systems. The Altix systems' large main memory and I/O subsystems allow computation and disk storage of rich sets of diagnostics during each integration, supporting the scientific objective to develop a better understanding of global ocean circulation model solution convergence as model resolution is increased.

  11. Sparse distributed memory

    Science.gov (United States)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
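    The partial-match retrieval described above can be sketched in miniature (toy parameters of our own choosing, not Kanerva's): random binary hard locations, Hamming-radius activation, counter-based storage, and majority-vote readout.

```python
# Minimal sparse-distributed-memory sketch. A write increments/decrements
# bit counters at every hard location within a Hamming radius of the cue
# address; a read sums counters over the activated locations and thresholds.
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

class SDM:
    def __init__(self, n_bits=64, n_locations=200, radius=28, seed=0):
        rng = random.Random(seed)
        self.n = n_bits
        self.radius = radius
        self.addresses = [[rng.randint(0, 1) for _ in range(n_bits)]
                          for _ in range(n_locations)]
        self.counters = [[0] * n_bits for _ in range(n_locations)]

    def _active(self, addr):
        return [i for i, a in enumerate(self.addresses)
                if hamming(a, addr) <= self.radius]

    def write(self, addr, data):
        for i in self._active(addr):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, addr):
        sums = [0] * self.n
        for i in self._active(addr):
            for j in range(self.n):
                sums[j] += self.counters[i][j]
        return [1 if s > 0 else 0 for s in sums]
```

    With a single stored pattern, reading with the original address, or with a cue a few bits away from it, recovers the stored data exactly, which is the "retrieval from partial matches" the abstract refers to.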

  12. Structured Memory for Neural Turing Machines

    OpenAIRE

    Zhang, Wei; Yu, Yang; Zhou, Bowen

    2015-01-01

    Neural Turing Machines (NTM) contain a memory component that simulates "working memory" in the brain to store and retrieve information, easing the learning of simple algorithms. So far, only a linearly organized memory has been proposed, and during experiments we observed that the model does not always converge and overfits easily when handling certain tasks. We think the memory component is key to some faulty behaviors of NTM, and that better organization of the memory component could help fight those problems. In this...

  13. Sparse distributed memory overview

    Science.gov (United States)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.

  14. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

    The main goal of this project is to investigate, develop, and implement algorithms for numerical linear algebra on parallel computers in order to acquire expertise in methods for parallel computations. An important motivation for analyzing and investigating the potential for parallelism in these algorithms is that many scientific applications rely heavily on the performance of the involved dense linear algebra building blocks. Even though we consider the distributed-memory as well as the shared-memory programming paradigm, the major part of the thesis is dedicated to distributed-memory architectures. We emphasize distributed-memory massively parallel computers - such as the Connection Machine model CM-200 and model CM-5/CM-5E - available to us at UNI-C and at Thinking Machines Corporation. The CM-200 was, at the time this project started, one of the few existing massively parallel computers...

  15. Probability distribution of machining center failures

    International Nuclear Information System (INIS)

    Jia Yazhou; Wang Molin; Jia Zhixin

    1995-01-01

    Through field tracing research on 24 Chinese cutter-changeable CNC machine tools (machining centers) over a period of one year, a database of operation and maintenance for machining centers was built. The failure data were fitted to the Weibull distribution and the exponential distribution, the goodness of fit was tested, and the failure distribution pattern of machining centers was found. Finally, reliability characterizations for machining centers are proposed.
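    For the exponential model mentioned in the abstract, the maximum-likelihood fit to times between failures is simply the sample mean (the MTBF), and the failure rate is its reciprocal; a Weibull fit with shape parameter 1 reduces to this case. A small illustration with made-up numbers (the paper's dataset is not reproduced here):

```python
# Exponential failure-model fit: MTBF is the sample mean of the
# times-between-failures; the hazard rate is 1/MTBF.
import math

def fit_exponential(tbf):
    """Return (mtbf, failure_rate) for a list of times between failures."""
    if not tbf:
        raise ValueError("need at least one observation")
    mtbf = sum(tbf) / len(tbf)
    return mtbf, 1.0 / mtbf

def reliability(t, rate):
    """Probability of surviving beyond time t under the fitted exponential model."""
    return math.exp(-rate * t)
```

    For instance, observed gaps of 50, 150, and 100 hours give an MTBF of 100 hours, a rate of 0.01 per hour, and a probability of about 0.37 of running 100 hours without failure.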

  16. Untyped Memory in the Java Virtual Machine

    DEFF Research Database (Denmark)

    Gal, Andreas; Probst, Christian; Franz, Michael

    2005-01-01

    We have implemented a virtual execution environment that executes legacy binary code on top of the type-safe Java Virtual Machine by recompiling native code instructions to type-safe bytecode. As it is essentially impossible to infer static typing into untyped machine code, our system emulates untyped memory on top of Java's type system. While this approach allows native code to be executed on any off-the-shelf JVM, the resulting runtime performance is poor. We propose a set of virtual machine extensions that add type-unsafe memory objects to the JVM. We contend that these JVM extensions do not relax Java's type system, as the same functionality can be achieved in pure Java, albeit much less efficiently.

  17. Distributed-Memory Fast Maximal Independent Set

    Energy Technology Data Exchange (ETDEWEB)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-09-13

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All of these algorithms were designed for shared-memory machines and are analyzed using the PRAM model; they do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
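    The round structure of Luby's randomized MIS can be sketched sequentially in a few lines (a "Luby(A)"-style illustration of the algorithm only; the paper's distributed-memory implementation is far more involved): each round, every live vertex draws a random priority, a vertex whose priority beats all live neighbors joins the MIS, and it and its neighbors drop out.

```python
# Luby-style randomized maximal independent set, simulated sequentially.
import random

def luby_mis(adj, seed=0):
    """adj: dict vertex -> set of neighbors. Returns a maximal independent set."""
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        # every live vertex draws a random priority for this round
        prio = {v: rng.random() for v in live}
        # local minima among live neighbors join the MIS
        winners = {v for v in live
                   if all(prio[v] < prio[u] for u in adj[v] if u in live)}
        mis |= winners
        # winners and their neighbors leave the graph
        live -= winners | {u for v in winners for u in adj[v]}
    return mis
```

    Each round is fully parallel (only neighbor priorities are compared), which is what makes the scheme attractive for PRAM analysis and, with the right communication layer, for distributed-memory execution.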

  18. A distributed algorithm for machine learning

    Science.gov (United States)

    Chen, Shihong

    2018-04-01

    This paper considers a distributed learning problem in which a group of machines in a connected network, each learning its own local dataset, aim to reach a consensus at an optimal model, by exchanging information only with their neighbors but without transmitting data. A distributed algorithm is proposed to solve this problem under appropriate assumptions.
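    A toy decentralized-gradient sketch consistent with the abstract's setup (our own construction, not the paper's algorithm): nodes on a ring each hold a private quadratic loss (x - a_i)^2, average their model with neighbors, and take a local gradient step, exchanging only model values, never data.

```python
# Decentralized gradient descent on a ring: mix with neighbors, then step
# on the local gradient. With quadratic losses (x - a_i)^2 the consensus
# optimum is the mean of the a_i.

def decentralized_gd(a, iters=4000, step=0.005):
    p = len(a)
    x = [0.0] * p
    for _ in range(iters):
        # mixing: uniform average with the two ring neighbors
        mixed = [(x[i - 1] + x[i] + x[(i + 1) % p]) / 3 for i in range(p)]
        # local gradient of (x - a_i)^2 is 2 * (x - a_i)
        x = [mixed[i] - step * 2 * (mixed[i] - a[i]) for i in range(p)]
    return x
```

    With a constant step size the nodes converge to a small neighborhood of the shared optimum; shrinking the step (or using gradient tracking) tightens the consensus.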

  19. Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.

    Science.gov (United States)

    Jeong, Doo Seok; Hwang, Cheol Seong

    2018-04-18

    Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNN is virtually built on contemporary data centers of the von Neumann architecture, physical (in part) DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. Particularly, resistance-based nonvolatile random access memory (NVRAM) highlights its handy and efficient application to the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material and device points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNN in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
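    The analog MAC operation the abstract refers to can be stated in one line: in an idealized resistive crossbar, the current on each output column is the dot product of the row voltages and the cell conductances (Ohm's and Kirchhoff's laws). A minimal numerical sketch, in our own notation:

```python
# Idealized crossbar multiply-accumulate: I_j = sum_i V_i * G[i][j].
# Device non-idealities (wire resistance, nonlinearity) are ignored.

def crossbar_mac(voltages, conductance_matrix):
    """Column currents for row voltages V and conductance matrix G."""
    cols = len(conductance_matrix[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductance_matrix))
            for j in range(cols)]
```

    Encoding a weight matrix as conductances thus turns a matrix-vector product, the dominant cost of DNN inference, into a single analog read.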

  20. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory-based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low-level intelligence tasks and aim at providing scalable solutions to high ...

  1. Document Classification Using Distributed Machine Learning

    OpenAIRE

    Aydin, Galip; Hallac, Ibrahim Riza

    2018-01-01

    In this paper, we investigate the performance and success rates of the Naïve Bayes Classification Algorithm for automatic classification of Turkish news into predetermined categories like economy, life, health etc. We use Apache Big Data technologies such as Hadoop, HDFS, Spark and Mahout, and apply these distributed technologies to Machine Learning.

  2. Techniques for Reducing Consistency-Related Communication in Distributed Shared Memory System

    OpenAIRE

    Zwaenepoel, W; Bennett, J.K.; Carter, J.B.

    1995-01-01

    Distributed shared memory (DSM) is an abstraction of shared memory on a distributed memory machine. Hardware DSM systems support this abstraction at the architecture level; software DSM systems support the abstraction within the runtime system. One of the key problems in building an efficient software DSM system is to reduce the amount of communication needed to keep the distributed memories consistent. In this paper we present four techniques for doing so: 1) software release consistency; 2)...

  3. Trinary Associative Memory Would Recognize Machine Parts

    Science.gov (United States)

    Liu, Hua-Kuang; Awwal, Abdul Ahad S.; Karim, Mohammad A.

    1991-01-01

    Trinary associative memory combines the merits and overcomes the major deficiencies of unipolar and bipolar logics by combining them in a three-valued logic that reverts to unipolar or bipolar binary selectively, as needed to perform specific tasks. The advantage of associative memory is that one obtains access to all parts of it simultaneously on the basis of the content, rather than the address, of the data. Consequently, it can be used to fully exploit the parallelism and speed of optical computing.

  4. Over-Distribution in Source Memory

    Science.gov (United States)

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  5. Total recall in distributive associative memories

    Science.gov (United States)

    Danforth, Douglas G.

    1991-01-01

    Iterative error correction of asymptotically large associative memories is equivalent to a one-step learning rule. This rule is the inverse of the activation function of the memory. Spectral representations of nonlinear activation functions are used to obtain the inverse in closed form for Sparse Distributed Memory, Selected-Coordinate Design, and Radial Basis Functions.

  6. Machine parts recognition using a trinary associative memory

    Science.gov (United States)

    Awwal, Abdul Ahad S.; Karim, Mohammad A.; Liu, Hua-Kuang

    1989-01-01

    The convergence mechanism of vectors in Hopfield's neural network in relation to recognition of partially known patterns is studied in terms of both inner products and Hamming distance. It has been shown that Hamming distance should not always be used in determining the convergence of vectors. Instead, inner product weighting coefficients play a more dominant role in certain data representations for determining the convergence mechanism. A trinary neuron representation for associative memory is found to be more effective for associative recall. Applications of the trinary associative memory to reconstruct machine part images that are partially missing are demonstrated by means of computer simulation as examples of the usefulness of this approach.

  7. A discrete Fourier transform for virtual memory machines

    Science.gov (United States)

    Galant, David C.

    1992-01-01

    An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN coded version of the algorithm is given for the case when the length of the sequence to be transformed is a power of two.
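    The locality idea behind virtual-memory-friendly FFTs can be sketched with the classic "four-step" factorization (our sketch, not Galant's algorithm): split N = N1*N2, do N1 short DFTs over strided subsequences, apply twiddle factors, then do N2 short DFTs, so each pass streams through memory in a small number of long runs, which is what a paging system rewards.

```python
# Four-step DFT: X[k2 + N2*k1] = sum over n1 of w1^(n1*k1) *
#   ( w^(n1*k2) * sum over n2 of w2^(n2*k2) * x[n1 + N1*n2] ),
# where w = exp(-2*pi*i/N), w1 and w2 are the N1-th and N2-th roots.
import cmath

def dft(x):
    """Naive O(n^2) DFT, used for the short inner transforms."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

def four_step_dft(x, n1, n2):
    n = n1 * n2
    assert len(x) == n
    # Step 1: for each residue r, DFT of the stride-n1 subsequence.
    inner = [dft([x[r + n1 * q] for q in range(n2)]) for r in range(n1)]
    # Step 2: twiddle factors w^(r*k2).
    for r in range(n1):
        for k2 in range(n2):
            inner[r][k2] *= cmath.exp(-2j * cmath.pi * r * k2 / n)
    # Steps 3-4: DFT across the r dimension and write out transposed.
    out = [0j] * n
    for k2 in range(n2):
        col = dft([inner[r][k2] for r in range(n1)])
        for k1 in range(n1):
            out[k2 + n2 * k1] = col[k1]
    return out
```

    Replacing the naive inner DFTs with recursive calls gives the usual O(N log N) cost while keeping the blocked, transpose-based memory access pattern.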

  8. A variable-mode stator consequent pole memory machine

    Science.gov (United States)

    Yang, Hui; Lyu, Shukang; Lin, Heyun; Zhu, Z. Q.

    2018-05-01

    In this paper, a variable-mode concept is proposed for the speed range extension of a stator-consequent-pole memory machine (SCPMM). An integrated permanent magnet (PM) and electrically excited control scheme is utilized to simplify the flux-weakening control instead of relatively complicated continuous PM magnetization control. Due to the nature of the memory machine, the magnetization state of low coercive force (LCF) magnets can be easily changed by applying either a positive or negative current pulse. Therefore, the number of PM poles may be changed to satisfy the specific performance requirement under different speed ranges, i.e. the machine with all PM poles can offer high torque output while that with half PM poles provides a wide constant power range. In addition, the SCPMM with non-magnetized PMs can be considered as a dual-three phase electrically excited reluctance machine, which can be fed by open-winding-based dual inverters that provide direct current (DC) bias excitation to further extend the speed range. The effectiveness of the proposed variable-mode operation for extending its operating region and improving the system reliability is verified by both finite element analysis (FEA) and experiments.

  9. Distributed learning enhances relational memory consolidation.

    Science.gov (United States)

    Litman, Leib; Davachi, Lila

    2008-09-01

    It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of forgetting relative to ML. Furthermore, we demonstrate that this savings in forgetting is specific to relational, but not item, memory. In the context of extant theories and knowledge of memory consolidation, these results suggest that an important mechanism underlying the mnemonic benefit of DL is enhanced memory consolidation. We speculate that synaptic strengthening mechanisms supporting long-term memory consolidation may be differentially mediated by the spacing of memory reactivation. These findings have broad implications for the scientific study of episodic memory consolidation and, more generally, for educational curriculum development and policy.

  10. Parallel discrete ordinates algorithms on distributed and common memory systems

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.; Brickner, R.G.

    1987-01-01

    The S_n algorithm employs iterative techniques in solving the linear Boltzmann equation. These methods, both ordered and chaotic, were compared on both the Denelcor HEP and the Intel hypercube. Strategies are linked to the organization and accessibility of memory (common memory versus distributed memory architectures), with common concern for acquisition of global information. Apart from this, the inherent parallelism of the algorithm maps directly onto the two architectures. Results comparing execution times, speedup, and efficiency are based on a representative 16-group (full upscatter and downscatter) sample problem. Calculations were performed on both the Los Alamos National Laboratory (LANL) Denelcor HEP and the LANL Intel hypercube. The Denelcor HEP is a 64-bit multiple-instruction, multiple-data (MIMD) machine consisting of up to 16 process execution modules (PEMs), each capable of executing 64 processes concurrently. Each PEM can cooperate on a job, or run several unrelated jobs, and share a common global memory through a crossbar switch. The Intel hypercube, on the other hand, is a distributed memory system composed of 128 processing elements, each with its own local memory. Processing elements are connected in a nearest-neighbor hypercube configuration, and sharing of data among processors requires execution of explicit message-passing constructs.

  11. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes the porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
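    The "extended subdomain" pattern the abstract describes, ghost (halo) cells refilled from neighbors before each stencil sweep, can be sketched in one dimension with processes simulated as list entries (names are illustrative, not UTCHEM's):

```python
# 1D domain decomposition with one-cell halos and a Jacobi-style sweep.
# Zero values outside the global domain play the role of boundary conditions.

def split_with_halo(x, p):
    n = len(x)
    b = n // p                      # assume p divides n, for simplicity
    return [[0.0] + x[i * b:(i + 1) * b] + [0.0] for i in range(p)]

def exchange_halos(parts):
    """Refill each subdomain's ghost cells from its neighbors' edge cells."""
    for q in range(len(parts)):
        parts[q][0] = parts[q - 1][-2] if q > 0 else 0.0
        parts[q][-1] = parts[q + 1][1] if q + 1 < len(parts) else 0.0

def jacobi_sweep(parts):
    exchange_halos(parts)
    for q, loc in enumerate(parts):
        new = loc[:]
        for i in range(1, len(loc) - 1):
            new[i] = 0.5 * (loc[i - 1] + loc[i + 1])
        parts[q] = new

def gather(parts):
    """Concatenate the interior cells back into a global array."""
    return [v for loc in parts for v in loc[1:-1]]
```

    After the halo exchange, every subdomain can apply the stencil independently, and the gathered result matches a serial sweep over the whole array.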

  12. Stochastic Distribution of Wear of Carbide Tools during Machining ...

    African Journals Online (AJOL)

    Journal of the Nigerian Association of Mathematical Physics ... The stochastic point model was used to determine the rate of wear distribution of the carbide tool ... Keywords: cutting speed, feed rate, machining time, tool life, reliability, wear.

  13. Construction and Application of an AMR Algorithm for Distributed Memory Computers

    OpenAIRE

    Deiterding, Ralf

    2003-01-01

    While the parallelization of blockstructured adaptive mesh refinement techniques is relatively straightforward on shared memory architectures, appropriate distribution strategies for the emerging generation of distributed memory machines are a topic of ongoing research. In this paper, a locality-preserving domain decomposition is proposed that partitions the entire AMR hierarchy from the base level on. It is shown that the approach reduces the communication costs and simplifies the im...

  14. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve, and we show that they are suitable for large-scale distributed memory machines.
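    The advantage of static pivoting claimed above is that the elimination order, and hence all data structures and the communication pattern, can be fixed before any numerical values are seen. A dense, serial sketch of LU without row exchanges (the test matrix is invented and diagonally dominant so no pivoting is needed; SuperLU_DIST additionally perturbs tiny pivots and applies iterative refinement):

```python
import numpy as np

def lu_no_pivot(A):
    """Dense LU without row exchanges: the elimination order (and hence
    fill and communication pattern) is fixed before any numerical values
    are seen, which is the essence of static pivoting."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
L, U = lu_no_pivot(A)
# Forward/back substitution: solve L y = b, then U x = y.
y = np.linalg.solve(L, b)
x = np.linalg.solve(U, y)
```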

  15. The Effects of Different Electrode Types for Obtaining Surface Machining Shape on Shape Memory Alloy Using Electrochemical Machining

    Science.gov (United States)

    Choi, S. G.; Kim, S. H.; Choi, W. K.; Moon, G. C.; Lee, E. S.

    2017-06-01

    Shape memory alloy (SMA) is an important material in the medical and aerospace industries because of the shape memory effect, the recovery of a deformed alloy to its original state through the application of temperature or stress. Some SMA parts require fine patterns, and electrochemical machining is a suitable method for producing them. For precision electrochemical machining with differently shaped electrodes, the current density must be controlled precisely, which in turn places requirements on the electrode shape. Precise square holes can be obtained on the SMA if an insulation layer suppresses the unnecessary (stray) current between electrode and workpiece. With a square electrode lacking an insulation layer, the stray current yields inexact square holes, whereas an electrode insulated only on its sides produces precise square holes. The removal rate also improved with the insulated electrode because the insulation layer concentrates the applied current in the machining zone. Controlling the stray current in this way to obtain the desired shape would be a significant contribution to the medical and aerospace industries.

  16. Optimizing Distributed Machine Learning for Large Scale EEG Data Set

    Directory of Open Access Journals (Sweden)

    M Bilal Shaikh

    2017-06-01

    Full Text Available Distributed Machine Learning (DML) has gained more importance than ever in this era of Big Data, and there are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation of data is at its limit, whereas increasing machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random data distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We conducted an empirical study using an EEG data set collected through the P300 Speller component of an ERP (Event Related Potential), which is widely used in BCI problems; it helps in translating the intention of a subject while performing a cognitive task. EEG data contain noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Machine learning techniques can help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning data into different chunks and preparing distributed models using an Elastic CV classifier. To present a case for optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that can improve the accuracy of distributed machine learners on average. Our results show better average AUC compared with random data partitioning, which gives the user no control over partitioning; the domain-specific intelligent partitioning improves the average accuracy of the distributed learner. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.
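    The contrast between framework-style random partitioning and user-defined, domain-aware partitioning can be sketched as follows (pure Python; the session-keyed scheme is a hypothetical stand-in for the paper's EEG partitioning):

```python
import random

def random_partition(samples, n_chunks, seed=0):
    """Framework default: shuffle, then deal samples round-robin."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    return [shuffled[i::n_chunks] for i in range(n_chunks)]

def keyed_partition(samples, key, n_chunks):
    """User-defined: all samples sharing a domain key (here, a recording
    session) land in the same chunk, so each learner sees coherent data."""
    chunks = [[] for _ in range(n_chunks)]
    for s in samples:
        chunks[key(s) % n_chunks].append(s)
    return chunks

# Toy data set: 40 trials spread over 4 recording sessions.
samples = [{"session": i % 4, "trial": i} for i in range(40)]
random_chunks = random_partition(samples, 4)
session_chunks = keyed_partition(samples, lambda s: s["session"], 4)
```

    With the keyed scheme, each distributed learner trains on one whole session rather than a random mixture, which is the kind of domain-informed layout the study argues for.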

  17. The Distributed Nature of Working Memory

    NARCIS (Netherlands)

    Christophel, Thomas B.; Klink, P. Christiaan; Spitzer, Bernhard; Roelfsema, Pieter R.; Haynes, John-Dylan

    2017-01-01

    Studies in humans and non-human primates have provided evidence for storage of working memory contents in multiple regions ranging from sensory to parietal and prefrontal cortex. We discuss potential explanations for these distributed representations: (i) features in sensory regions versus

  18. Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines

    DEFF Research Database (Denmark)

    Krause, Oswin

    The thesis is concerned with learning distributions in the two settings of Evolutionary Strategies (ESs) and Restricted Boltzmann Machines (RBMs). In both cases, the distributions are learned from samples, albeit with different goals. Evolutionary Strategies are concerned with finding an optimum ...

  19. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on micro computers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  20. Distributed terascale volume visualization using distributed shared virtual memory

    KAUST Repository

    Beyer, Johanna; Hadwiger, Markus; Schneider, Jens; Jeong, Wonki; Pfister, Hanspeter

    2011-01-01

    Table 1 illustrates the impact of different distribution unit sizes, different screen resolutions, and numbers of GPU nodes. We use two and four GPUs (NVIDIA Quadro 5000 with 2.5 GB memory) and a mouse cortex EM dataset (see Figure 2) of resolution

  1. Effect of tellurium on machinability and mechanical property of CuAlMnZn shape memory alloy

    International Nuclear Information System (INIS)

    Liu Na; Li Zhou; Xu Genying; Feng Ze; Gong Shu; Zhu Lilong; Liang Shuquan

    2011-01-01

    Highlights: → A novel free-machining Cu-7.5Al-9.7Mn-3.4Zn-0.3Te (wt.%) shape memory alloy has been developed. → The dispersed Te-rich particles are 2-5 μm in size. → The CuAlMnZnTe alloy has good machinability, approaching that of BZn15-24-1.5, due to the addition of Te. → Its shape memory property is the same as that of the Te-free CuAlMnZn alloy. → The CuAlMnZn shape memory alloys with and without Te both show good ductility when annealed at 700 deg. C for 15 min. - Abstract: The microstructure transition, shape memory effect, machinability and mechanical properties of the CuAlMnZn alloy with and without Te have been studied using X-ray diffraction analysis, chip observation, scanning electron microscopy (SEM), tensile strength testing, differential scanning calorimetry (DSC), and a semi-quantitative shape memory effect (SME) test. Te-rich particles 2-5 μm in size are dispersed in the grain interiors and at grain boundaries. With the addition of Te, the machinability of the CuAlMnZnTe alloy increases to approach that of BZn15-24-1.5, while its shape memory property remains the same as that of the CuAlMnZn alloy. The CuAlMnZn shape memory alloys with and without Te both show good ductility when annealed at 700 deg. C for 15 min.

  2. Tool set for distributed real-time machine control

    Science.gov (United States)

    Carrott, Andrew J.; Wright, Christopher D.; West, Andrew A.; Harrison, Robert; Weston, Richard H.

    1997-01-01

    Demands for increased control capabilities require next generation manufacturing machines to comprise intelligent building elements, physically located at the point where the control functionality is required. Networks of modular intelligent controllers are increasingly designed into manufacturing machines and usable standards are slowly emerging. To implement a control system using off-the-shelf intelligent devices from multi-vendor sources requires a number of well defined activities, including (a) the specification and selection of interoperable control system components, (b) device independent application programming and (c) device configuration, management, monitoring and control. This paper briefly discusses the support for the above machine lifecycle activities through the development of an integrated computing environment populated with an extendable software toolset. The toolset supports machine builder activities such as initial control logic specification, logic analysis, machine modeling, mechanical verification, application programming, automatic code generation, simulation/test, version control, distributed run-time support and documentation. The environment itself consists of system management tools and a distributed object-oriented database which provides storage for the outputs from machine lifecycle activities and specific target control solutions.

  3. Distributed terascale volume visualization using distributed shared virtual memory

    KAUST Repository

    Beyer, Johanna

    2011-10-01

    Table 1 illustrates the impact of different distribution unit sizes, different screen resolutions, and numbers of GPU nodes. We use two and four GPUs (NVIDIA Quadro 5000 with 2.5 GB memory) and a mouse cortex EM dataset (see Figure 2) of resolution 21,494 x 25,790 x 1,850 = 955GB. The size of the virtual distribution units significantly influences the data distribution between nodes. Small distribution units result in a high depth complexity for compositing. Large distribution units lead to a low utilization of GPUs, because in the worst case only a single distribution unit will be in view, which is rendered by only a single node. The choice of an optimal distribution unit size depends on three major factors: the output screen resolution, the block cache size on each node, and the number of nodes. Currently, we are working on optimizing the compositing step and network communication between nodes. © 2011 IEEE.

  4. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    Science.gov (United States)

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large share of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance, but it adversely affects MPI_Allgather by increasing the communication time between processors. This necessitates improving the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism facilitates efficient communication between the processors in precise steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulation of large neuronal network models.

  5. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    Directory of Open Access Journals (Sweden)

    Danish Shehzad

    2016-01-01

    Full Text Available The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large share of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance, but it adversely affects MPI_Allgather by increasing the communication time between processors. This necessitates improving the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism facilitates efficient communication between the processors in precise steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulation of large neuronal network models.
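    The recursive doubling mechanism mentioned in both records above completes an allgather in log2(P) exchange rounds instead of P-1. A serial simulation (Python lists stand in for MPI buffers; the rank count is assumed to be a power of two):

```python
import math

def allgather_recursive_doubling(local_items):
    """Simulate MPI_Allgather via recursive doubling: in round k each
    rank swaps its accumulated block with partner (rank XOR 2**k), so
    all P ranks hold all data after log2(P) steps instead of P-1."""
    p = len(local_items)
    assert p & (p - 1) == 0, "sketch assumes a power-of-two rank count"
    bufs = [[x] for x in local_items]   # bufs[r] = data rank r currently holds
    for k in range(int(math.log2(p))):
        dist = 1 << k
        new = [None] * p
        for r in range(p):
            partner = r ^ dist
            # Each pair exchanges buffers; the lower rank's data goes first.
            lo, hi = (r, partner) if r < partner else (partner, r)
            new[r] = bufs[lo] + bufs[hi]
        bufs = new
    return bufs

result = allgather_recursive_doubling(["s0", "s1", "s2", "s3"])
```

    In round k the message size doubles while the number of rounds stays logarithmic, which is why the technique shortens spike exchange as processor counts grow.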

  6. Distributed Control System Design for Portable PC Based CNC Machine

    Directory of Open Access Journals (Sweden)

    Roni Permana Saputra

    2014-07-01

    Full Text Available The demand for automated machining has increased, prompting research toward goals such as portability, low-cost manufacturability, interoperability, and simplicity of machine usage. These improvements are pursued without neglecting performance analysis and usability evaluation. This research designed a distributed control system for controlling a portable CNC machine. The design consists of a main processing unit, a secondary processing unit, motor control, and motor drivers. A preliminary simulation was conducted for performance analysis, including linear and circular accuracy. The simulation results show linear accuracy of up to 2 μm, with a total cost for the whole processing unit of up to 5 million IDR.

  7. A view of Kanerva's sparse distributed memory

    Science.gov (United States)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers, called pattern computers. Pattern computers may close the gap between the capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and the capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.
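    A minimal numerical sketch of Kanerva's SDM (the sizes and activation radius here are invented; real designs tune them to the address length): hard locations within a Hamming radius of the probe address are activated, and per-bit counters accumulate writes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 2000, 115   # address bits, hard locations, activation radius

# Hard locations: fixed random addresses, each with a counter vector.
addresses = rng.integers(0, 2, size=(M, N))
counters = np.zeros((M, N), dtype=int)

def active(addr):
    """Locations within Hamming distance R of the probe address."""
    return np.count_nonzero(addresses != addr, axis=1) <= R

def write(addr, data):
    sel = active(addr)
    counters[sel] += np.where(data == 1, 1, -1)   # increment for 1s, decrement for 0s

def read(addr):
    sel = active(addr)
    return (counters[sel].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)          # autoassociative store
noisy = pattern.copy()
flip = rng.choice(N, size=10, replace=False)
noisy[flip] ^= 1                 # corrupt 10 of the 256 address bits
recalled = read(noisy)
```

    Reading through a noisy address activates a heavily overlapping set of locations, so in this single-pattern example the stored pattern is typically recovered exactly.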

  8. Distributed Memory Parallel Computing with SEAWAT

    Science.gov (United States)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (~10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources
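    The RCB load balancing used for subdomain partitioning (step a above) can be sketched in a few lines (hypothetical point cloud; a real SEAWAT grid would partition its cell coordinates):

```python
import numpy as np

def rcb(points, ids, n_parts):
    """Recursive Coordinate Bisection: split along the longest axis at
    the median until n_parts (assumed a power of two) balanced parts remain."""
    if n_parts == 1:
        return [ids]
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))
    order = np.argsort(points[:, axis])
    half = len(order) // 2
    lo, hi = order[:half], order[half:]
    return (rcb(points[lo], ids[lo], n_parts // 2) +
            rcb(points[hi], ids[hi], n_parts // 2))

rng = np.random.default_rng(1)
pts = rng.random((1000, 3))            # stand-in for cell centres of a model grid
parts = rcb(pts, np.arange(1000), 8)   # partition over 8 subdomains
```

    Median splits keep the subdomain sizes equal, which is exactly the load-balance property the preconditioner relies on.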

  9. Migration of vectorized iterative solvers to distributed memory architectures

    Energy Technology Data Exchange (ETDEWEB)

    Pommerell, C. [AT& T Bell Labs., Murray Hill, NJ (United States); Ruehl, R. [CSCS-ETH, Manno (Switzerland)

    1994-12-31

    Both necessity and opportunity motivate the use of high-performance computers for iterative linear solvers. Necessity results from the size of the problems being solved: smaller problems are often better handled by direct methods. Opportunity arises from the formulation of the iterative methods in terms of simple linear algebra operations, even if this 'natural' parallelism is not easy to exploit in irregularly structured sparse matrices and with good preconditioners. As a result, high-performance implementations of iterative solvers have attracted a lot of interest in recent years. Most efforts are geared to vectorize or parallelize the dominating operation (structured or unstructured sparse matrix-vector multiplication), or to increase locality and parallelism by reformulating the algorithm (reducing global synchronization in inner products or local data exchange in preconditioners). Target architectures for iterative solvers currently include mostly vector supercomputers and architectures with one or few optimized (e.g., super-scalar and/or super-pipelined RISC) processors and hierarchical memory systems. More recently, parallel computers with physically distributed memory and a better price/performance ratio have been offered by vendors as a very interesting alternative to vector supercomputers. However, programming comfort on such distributed memory parallel processors (DMPPs) still lags behind. Here the authors are concerned with iterative solvers and their changing computing environment. In particular, they are considering migration from traditional vector supercomputers to DMPPs. Application requirements force one to use flexible and portable libraries. They want to extend the portability of iterative solvers rather than reimplementing everything for each new machine, or even for each new architecture.

  10. Strategies and Principles of Distributed Machine Learning on Big Data

    Directory of Open Access Journals (Sweden)

    Eric P. Xing

    2016-06-01

    Full Text Available The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions thereupon). In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required—and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that “big” ML systems can benefit greatly from ML-rooted statistical and algorithmic insights—and that ML researchers should therefore not shy away from such systems design—we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area

  11. Distributed Extreme Learning Machine for Nonlinear Learning over Network

    Directory of Open Access Journals (Sweden)

    Songyan Huang

    2015-02-01

    Full Text Available Distributed data collection and analysis over a network are ubiquitous, especially over a wireless sensor network (WSN). To our knowledge, the data model used in most of the distributed algorithms is linear. However, in real applications, the linearity of systems is not always guaranteed. In nonlinear cases, the single hidden layer feedforward neural network (SLFN) with radial basis function (RBF) hidden neurons has the ability to approximate any continuous function and, thus, may be used as the nonlinear learning system. However, confined by the communication cost, using the distributed version of the conventional algorithms to train the neural network directly is usually prohibited. Fortunately, based on the theorems provided in the extreme learning machine (ELM) literature, we only need to compute the output weights of the SLFN. Computing the output weights itself is a linear learning problem, although the input-output mapping of the overall SLFN is still nonlinear. Using the distributed algorithm to cooperatively compute the output weights of the SLFN, we obtain a distributed extreme learning machine (dELM) for nonlinear learning in this paper. This dELM is applied to regression and classification problems to demonstrate its effectiveness and advantages.
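    The ELM observation above, that only the output weights need training, reduces learning to one linear least-squares solve. A serial sketch with an RBF hidden layer (toy data and invented sizes; in dELM the solve is what the nodes would compute cooperatively):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_hidden(X, centers, gamma=10.0):
    """Random RBF hidden layer: centers are drawn once and never trained."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy regression target on [0, 1].
X = rng.random((200, 1))
y = np.sin(2 * np.pi * X[:, 0])

centers = rng.random((30, 1))          # random hidden neurons, fixed
H = rbf_hidden(X, centers)
# ELM step: output weights from a single linear least-squares solve.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta   # overall mapping is still nonlinear in X
```

    Only `beta` is learned; the random nonlinear hidden mapping stays fixed, which is what keeps the distributed problem linear.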

  12. X-ray evaluation of residual stress distributions within surface machined layer generated by surface machining and sequential welding

    International Nuclear Information System (INIS)

    Taniguchi, Yuu; Okano, Shigetaka; Mochizuki, Masahito

    2017-01-01

    The excessive tensile residual stress generated by welding after surface machining may be an important factor in causing stress corrosion cracking (SCC) in nuclear power plants, so the residual stress distribution needs to be understood and controlled appropriately. In this study, residual stress distributions within the surface machined layer generated by surface machining and sequential welding were evaluated by the X-ray diffraction method. Depth-directional distributions were also investigated by electrolytic polishing. In addition, to consider the effect of the work-hardened layer on the residual stress distributions, we also measured the full width at half maximum (FWHM) obtained from X-ray diffraction. The testing material was a low-carbon austenitic stainless steel, type SUS316L. Test specimens were prepared by surface machining with different cutting conditions, and bead-on-plate welding under the same welding condition was then carried out on the test specimens with their different surface machined layers. As a result, the tensile residual stress generated by surface machining increased with increasing cutting speed and showed a nearly uniform distribution on the surface. Furthermore, the tensile residual stress decreased drastically with increasing measurement depth within the surface machined layer, approaching 0 MPa after passing through a compressive value. The FWHM also decreased drastically with increasing measurement depth and became almost constant beyond a certain depth, which was nearly the same regardless of machining condition, within the surface machined layer in all specimens. After welding, the transverse distribution of the longitudinal residual stress varied in the area apart from the weld center according to machining conditions and had a maximum value in the heat affected zone. The magnitude of the maximum residual stress was almost equal regardless of the machining condition and decreased with increasing measurement depth within the surface machined layer. 
Finally, the

  13. 34 CFR 395.8 - Distribution and use of income from vending machines on Federal property.

    Science.gov (United States)

    2010-07-01

    ... Distribution and use of income from vending machines on Federal property. (a) Vending machine income from vending ... the basis of each prior year's operation, except that vending machine income shall not accrue to any...

  14. The distribution of controlled drugs on banknotes via counting machines.

    Science.gov (United States)

    Carter, James F; Sleeman, Richard; Parry, Joanna

    2003-03-27

    Bundles of paper, similar to sterling banknotes, were counted in banks in England and Wales. Subsequent analysis showed that the counting process, both by machine and by hand, transferred nanogram amounts of cocaine to the paper. Crystalline material, similar to cocaine hydrochloride, could be observed on the surface of the paper following counting. The geographical distribution of contamination broadly followed Government statistics for cocaine usage within the UK. Diacetylmorphine, Delta(9)-tetrahydrocannabinol (THC) and 3,4-methylenedioxymethylamphetamine (MDMA) were not detected during this study.

  15. Distributed Shared Memory for the Cell Broadband Engine (DSMCBE)

    DEFF Research Database (Denmark)

    Larsen, Morten Nørgaard; Skovhede, Kenneth; Vinter, Brian

    2009-01-01

    in and out of non-coherent local storage blocks for each special processor element. In this paper we present a software library, namely the Distributed Shared Memory for the Cell Broadband Engine (DSMCBE). By using techniques known from distributed shared memory, DSMCBE allows programmers to program the CELL...

  16. Ring interconnection for distributed memory automation and computing system

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, V I [Inst. for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation)

    1996-12-31

    Problems of development of measurement, acquisition and control systems based on a distributed memory and a ring interface are discussed. It has been found that the RAM LINK-type protocol can be used for ringlet links in non-symmetrical distributed memory architecture multiprocessor system interaction. 5 refs.

  17. Translation techniques for distributed-shared memory programming models

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, Douglas James [Iowa State Univ., Ames, IA (United States)

    2005-01-01

    The high performance computing community has experienced an explosive improvement in distributed-shared memory hardware. Driven by increasing real-world problem complexity, this explosion has ushered in vast numbers of new systems. Each new system presents new challenges to programmers and application developers. Part of the challenge is adapting to new architectures with new performance characteristics. Different vendors release systems with widely varying architectures that perform differently in different situations. Furthermore, since vendors need only provide a single performance number (total MFLOPS, typically for a single benchmark), they only have strong incentive initially to optimize the API of their choice. Consequently, only a fraction of the available APIs are well optimized on most systems. This causes issues porting and writing maintainable software, let alone issues for programmers burdened with mastering each new API as it is released. Also, programmers wishing to use a certain machine must choose their API based on the underlying hardware instead of the application. This thesis argues that a flexible, extensible translator for distributed-shared memory APIs can help address some of these issues. For example, a translator might take as input code in one API and output an equivalent program in another. Such a translator could provide instant porting for applications to new systems that do not support the application's library or language natively. While open-source APIs are abundant, they do not perform optimally everywhere. A translator would also allow performance testing using a single base code translated to a number of different APIs. Most significantly, this type of translator frees programmers to select the most appropriate API for a given application based on the application (and developer) itself instead of the underlying hardware.

  18. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    Full Text Available One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) scheme is dynamic, distributed, load-dependent, and scalable. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
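    A toy round-based simulation of the neighborhood-contracting idea (the ring topology, threshold, and round count are invented; the real ACWN is asynchronous and message-driven):

```python
def acwn_step(loads, neighbors, threshold=2):
    """One round of the neighborhood idea: an overloaded node hands a
    process to its least-loaded neighbor, so work spreads hop by hop."""
    new = loads[:]
    for node, nbrs in enumerate(neighbors):
        target = min(nbrs, key=lambda n: new[n])
        if new[node] - new[target] >= threshold:
            new[node] -= 1
            new[target] += 1
    return new

# 4-node ring; all work starts on node 0, as with a single spawning process.
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
loads = [8, 0, 0, 0]
for _ in range(10):
    loads = acwn_step(loads, neighbors)
```

    Because each node consults only its neighborhood, the scheme needs no global load information, which is what makes it scalable.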

  19. Memory-assisted measurement-device-independent quantum key distribution

    Science.gov (United States)

    Panayi, Christiana; Razavi, Mohsen; Ma, Xiongfeng; Lütkenhaus, Norbert

    2014-04-01

    A protocol with the potential of beating the existing distance records for conventional quantum key distribution (QKD) systems is proposed. It borrows ideas from quantum repeaters by using memories in the middle of the link, and that of measurement-device-independent QKD, which only requires optical source equipment at the user's end. For certain memories with short access times, our scheme allows a higher repetition rate than that of quantum repeaters with single-mode memories, thereby requiring lower coherence times. By accounting for various sources of nonideality, such as memory decoherence, dark counts, misalignment errors, and background noise, as well as timing issues with memories, we develop a mathematical framework within which we can compare QKD systems with and without memories. In particular, we show that with the state-of-the-art technology for quantum memories, it is potentially possible to devise memory-assisted QKD systems that, at certain distances of practical interest, outperform current QKD implementations.

  20. Developing a software for tracking the memory states of the machines in the LHCb Filter Farm

    CERN Document Server

    Jain, Harshit

    2017-01-01

    The LHCb Event Filter Farm consists of more than 1500 server nodes with a total of roughly 65 TB of operating memory. The memory is crucial for the success of the LHCb experiment, since the proton-proton collisions are temporarily stored on these memory modules. Unfortunately, the aging nodes of the server farm occasionally suffer losses of their memory modules. The lower the available memory, the lower the performance that can be obtained. Relying on users or administrators to pay attention to this matter is inefficient; an automated approach is needed. The aim of this project was to develop a software to monitor a set of test machines. The software stores the data of the memory sticks in advance in a database which will be used for future reference. It then checks the memory sticks at a future time instant to find any failures. In the case of any such losses the software looks up in the database to find out which memory sticks have been lost and displays all information of those sticks in a log fi...

  1. A Comparison of Two Paradigms for Distributed Shared Memory

    NARCIS (Netherlands)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.; Tanenbaum, A.S.

    1992-01-01

    Two paradigms for distributed shared memory on loosely‐coupled computing systems are compared: the shared data‐object model as used in Orca, a programming language specially designed for loosely‐coupled computing systems, and the shared virtual memory model. For both paradigms two systems are

  2. Distributed trace using central performance counter memory

    Science.gov (United States)

    Satterfield, David L.; Sexton, James C.

    2013-01-22

    A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores it in the memory co-located with the central storage unit.

  3. Influence of magnet eddy current on magnetization characteristics of variable flux memory machine

    Science.gov (United States)

    Yang, Hui; Lin, Heyun; Zhu, Z. Q.; Lyu, Shukang

    2018-05-01

    In this paper, the magnet eddy current characteristics of a newly developed variable flux memory machine (VFMM) are investigated. First, the machine structure, the non-linear hysteresis characteristics, and the eddy current modeling of the low-coercive-force magnet are described. The PM eddy current behaviors when applying demagnetizing current pulses are then unveiled and investigated. The mismatch of the required demagnetization currents between the cases with and without considering the magnet eddy current is identified. In addition, the influences of the magnet eddy current on the demagnetization effect of the VFMM are analyzed. Finally, a prototype is manufactured and tested to verify the theoretical analyses.

  4. Massively Parallel Polar Decomposition on Distributed-Memory Systems

    KAUST Repository

    Ltaief, Hatem; Sukkari, Dalal E.; Esposito, Aniello; Nakatsukasa, Yuji; Keyes, David E.

    2018-01-01

    We present a high-performance implementation of the Polar Decomposition (PD) on distributed-memory systems. Building upon the QR-based Dynamically Weighted Halley (QDWH) algorithm, the key idea lies in finding the best rational approximation

  5. Working Memory and Distributed Vocabulary Learning.

    Science.gov (United States)

    Atkins, Paul W. B.; Baddeley, Alan D.

    1998-01-01

    Tested the hypothesis that individual differences in immediate-verbal-memory span predict success in second-language vocabulary acquisition. In the two-session study, adult subjects learned 56 English-Finnish translations. Tested one week later, subjects were less likely to remember those words they had difficulty learning, even though they had…

  6. A model for Intelligent Random Access Memory architecture (IRAM) cellular automata algorithms on the Associative String Processing machine (ASTRA)

    CERN Document Server

    Rohrbach, F; Vesztergombi, G

    1997-01-01

    In the near future, computer performance will be completely determined by how long it takes to access memory. There are bottlenecks in memory latency and in memory-to-processor interface bandwidth. The IRAM initiative could be the answer, by putting the Processor-In-Memory (PIM). Starting from the massively parallel processing concept, one reaches a similar conclusion. The MPPC (Massively Parallel Processing Collaboration) project and the 8K-processor ASTRA machine (Associative String Test bench for Research & Applications) developed at CERN [kuala] can be regarded as forerunners of the IRAM concept. The computing power of the ASTRA machine, regarded as an IRAM with 64 one-bit processors on a 64×64 bit-matrix memory chip, has been demonstrated by running statistical physics algorithms: one-dimensional stochastic cellular automata, as a simple model for dynamical phase transitions. As a relevant result for physics, the damage spreading of this model has been investigated.

  7. Efficient implementations of block sparse matrix operations on shared memory vector machines

    International Nuclear Information System (INIS)

    Washio, T.; Maruyama, K.; Osoda, T.; Doi, S.; Shimizu, F.

    2000-01-01

    In this paper, we propose vectorization and shared memory-parallelization techniques for block-type random sparse matrix operations in finite element (FEM) applications. Here, a block corresponds to unknowns on one node in the FEM mesh and we assume that the block size is constant over the mesh. First, we discuss some basic vectorization ideas (the jagged diagonal (JAD) format and the segmented scan algorithm) for the sparse matrix-vector product. Then, we extend these ideas to the shared memory parallelization. After that, we show that the techniques can be applied not only to the sparse matrix-vector product but also to the sparse matrix-matrix product, the incomplete or complete sparse LU factorization and preconditioning. Finally, we report the performance evaluation results obtained on an NEC SX-4 shared memory vector machine for linear systems in some FEM applications. (author)
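    As a rough illustration of the jagged diagonal (JAD) idea (a NumPy sketch, not the authors' SX-4 implementation; `to_jad` and `jad_matvec` are invented names): rows are permuted by decreasing nonzero count so that the k-th nonzeros of all rows line up into long "jagged diagonals", each of which can be processed as one long vector operation — the property that makes the format attractive on vector machines.

```python
import numpy as np

def to_jad(A):
    """Convert a dense matrix to a simple jagged-diagonal layout: permute
    rows by decreasing nonzero count, then gather the k-th nonzero of
    every row into the k-th jagged diagonal (rows, cols, vals)."""
    nz = [np.flatnonzero(r) for r in A]
    perm = sorted(range(A.shape[0]), key=lambda i: -len(nz[i]))
    njd = max(len(nz[i]) for i in perm)
    diags = []
    for k in range(njd):
        rows = [i for i in perm if k < len(nz[i])]
        cols = [nz[i][k] for i in rows]
        vals = [A[i, nz[i][k]] for i in rows]
        diags.append((np.array(rows), np.array(cols), np.array(vals)))
    return diags

def jad_matvec(diags, x, n):
    y = np.zeros(n)
    for rows, cols, vals in diags:   # each jagged diagonal: one vector op
        y[rows] += vals * x[cols]
    return y

A = np.array([[4., 0., 1.],
              [0., 2., 0.],
              [3., 0., 5.]])
x = np.array([1., 2., 3.])
y = jad_matvec(to_jad(A), x, 3)      # matches A @ x
```

    The inner loop touches every row still "alive" at depth k, so vector lengths stay long even when per-row nonzero counts vary — the point of the jagged permutation.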

  8. Prediction of residual stress distributions due to surface machining and welding and crack growth simulation under residual stress distribution

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Katsuyama, Jinya; Onizawa, Kunio; Hashimoto, Tadafumi; Mikami, Yoshiki; Mochizuki, Masahito

    2011-01-01

    Research highlights: → Residual stress distributions due to welding and machining are evaluated by XRD and FEM. → Residual stress due to machining shows higher tensile stress than welding near the surface. → Crack growth analysis is performed using the calculated residual stress. → Crack growth is affected by machining rather than welding. → Machining is an important factor in crack growth. - Abstract: In nuclear power plants, stress corrosion cracking (SCC) has been observed near the weld zone of the core shroud and primary loop recirculation (PLR) pipes made of low-carbon austenitic stainless steel Type 316L. The joining process of pipes usually includes surface machining and welding. Both processes induce residual stresses, and residual stresses are thus important factors in the occurrence and propagation of SCC. In this study, the finite element method (FEM) was used to estimate residual stress distributions generated by butt welding and surface machining. A thermo-elastic-plastic analysis was performed for the welding simulation, and a thermo-mechanical coupled analysis based on the Johnson-Cook material model was performed for the surface machining simulation. In addition, a crack growth analysis based on stress intensity factor (SIF) calculation was performed using the calculated residual stress distributions generated by welding and surface machining. The surface machining analysis showed that tensile residual stress due to surface machining exists only within approximately 0.2 mm of the machined surface, and that the surface residual stress increases with cutting speed. The crack growth analysis showed that the crack depth is affected by both surface machining and welding, and the crack length is more affected by surface machining than by welding.

  9. Memory-assisted measurement-device-independent quantum key distribution

    International Nuclear Information System (INIS)

    Panayi, Christiana; Razavi, Mohsen; Ma, Xiongfeng; Lütkenhaus, Norbert

    2014-01-01

    A protocol with the potential of beating the existing distance records for conventional quantum key distribution (QKD) systems is proposed. It borrows ideas from quantum repeaters by using memories in the middle of the link, and that of measurement-device-independent QKD, which only requires optical source equipment at the user's end. For certain memories with short access times, our scheme allows a higher repetition rate than that of quantum repeaters with single-mode memories, thereby requiring lower coherence times. By accounting for various sources of nonideality, such as memory decoherence, dark counts, misalignment errors, and background noise, as well as timing issues with memories, we develop a mathematical framework within which we can compare QKD systems with and without memories. In particular, we show that with the state-of-the-art technology for quantum memories, it is potentially possible to devise memory-assisted QKD systems that, at certain distances of practical interest, outperform current QKD implementations. (paper)

  10. Monte Carlo photon transport on shared memory and distributed memory parallel processors

    International Nuclear Information System (INIS)

    Martin, W.R.; Wan, T.C.; Abdel-Rahman, T.S.; Mudge, T.N.; Miura, K.

    1987-01-01

    Parallelized Monte Carlo algorithms for analyzing photon transport in an inertially confined fusion (ICF) plasma are considered. Algorithms were developed for shared memory (vector and scalar) and distributed memory (scalar) parallel processors. The shared memory algorithm was implemented on the IBM 3090/400, and timing results are presented for dedicated runs with two, three, and four processors. Two alternative distributed memory algorithms (replication and dispatching) were implemented on a hypercube parallel processor (1 through 64 nodes). The replication algorithm yields essentially full efficiency for all cube sizes; with the 64-node configuration, the absolute performance is nearly the same as with the CRAY X-MP. The dispatching algorithm also yields efficiencies above 80% in a large simulation for the 64-processor configuration
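    The replication algorithm's near-perfect efficiency comes from each node running its own independent batch of histories, with only a final reduction across nodes. A toy sketch of the pattern (estimating π rather than photon transport; the node count and tally are illustrative):

```python
import random

def histories(seed, n):
    """One 'node' of the replication algorithm: run n independent particle
    histories with a private RNG stream and return a partial tally (a toy
    tally here: hits inside the unit quarter-circle)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return hits

# Replication over 4 "nodes": no communication at all until the final
# reduction, which is why the scheme reaches essentially full efficiency.
n_nodes, n_per_node = 4, 50_000
tallies = [histories(seed, n_per_node) for seed in range(n_nodes)]
pi_est = 4.0 * sum(tallies) / (n_nodes * n_per_node)
```

    A dispatching scheme, by contrast, would hand out work units on demand, trading some communication overhead for better load balance on heterogeneous workloads.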

  11. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    Science.gov (United States)

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
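    The core mechanism such strategies adjust is a bounded-staleness check of the kind used in stale-synchronous-parallel schemes. A hypothetical sketch (the paper's DSP tunes the bound at run time from its performance-monitoring model; here the bound is a fixed parameter, and `may_advance` is an invented name):

```python
def may_advance(clocks, worker, staleness):
    """Bounded-staleness check: a worker may start its next iteration
    only while it is fewer than `staleness` iterations ahead of the
    slowest worker, so fast sensors keep computing without drifting
    arbitrarily far from stragglers."""
    return clocks[worker] - min(clocks.values()) < staleness

clocks = {"fast": 7, "medium": 5, "slow": 4}
ok_medium = may_advance(clocks, "medium", staleness=3)   # 5 - 4 = 1 < 3
ok_fast = may_advance(clocks, "fast", staleness=3)       # 7 - 4 = 3, blocked
```

    Setting the bound to 1 recovers bulk-synchronous training; a very large bound approaches fully asynchronous updates — dynamically moving between these extremes is the strategy's balancing act.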

  12. Machine learning of network metrics in ATLAS Distributed Data Management

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration; Toler, Wesley; Vamosi, Ralf; Bogado Garcia, Joaquin Ignacio

    2017-01-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our m...

  13. Machine learning of network metrics in ATLAS Distributed Data Management

    Science.gov (United States)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.

  14. The performance of disk arrays in shared-memory database machines

    Science.gov (United States)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.

  15. Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning

    International Nuclear Information System (INIS)

    Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron; Arnold, Dorian

    2017-01-01

    Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on Infiniband networks. We expose the interactions between RDMA, main memory, and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that have previously been shown to be performance-resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluate three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation, and network throttling.

  16. A distributed-memory hierarchical solver for general sparse linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering

    2017-12-20

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.

  17. TF.Learn: TensorFlow's High-level Module for Distributed Machine Learning

    OpenAIRE

    Tang, Yuan

    2016-01-01

    TF.Learn is a high-level Python module for distributed machine learning inside TensorFlow. It provides an easy-to-use Scikit-learn style interface to simplify the process of creating, configuring, training, evaluating, and experimenting with a machine learning model. TF.Learn integrates a wide range of state-of-the-art machine learning algorithms built on top of TensorFlow's low-level APIs for small to large-scale supervised and unsupervised problems. This module focuses on bringing machine learning t...

  18. Dynamic overset grid communication on distributed memory parallel processors

    Science.gov (United States)

    Barszcz, Eric; Weeratunga, Sisira K.; Meakin, Robert L.

    1993-01-01

    A parallel distributed memory implementation of intergrid communication for dynamic overset grids is presented. Included are discussions of various options considered during development. Results are presented comparing an Intel iPSC/860 to a single processor Cray Y-MP. Results for grids in relative motion show the iPSC/860 implementation to be faster than the Cray implementation.

  19. DISTRIBUTED SYSTEM FOR HUMAN MACHINE INTERACTION IN VIRTUAL ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Abraham Obed Chan-Canche

    2017-07-01

    Full Text Available Communication networks built from multiple devices and sensors are becoming more common. These device networks enable the development of human-machine interaction, which aims to improve human performance by generating an adaptive environment in response to the information the network provides. The problem addressed in this work is the quick integration of a device network that allows the development of a flexible immersive environment for different uses.

  20. Sparse Distributed Memory: understanding the speed and robustness of expert memory

    Directory of Open Access Journals (Sweden)

    Marcelo Salhab Brogliato

    2014-04-01

    Full Text Available How can experts, sometimes in exacting detail, almost immediately and very precisely recall memory items from a vast repertoire? The problem in which we will be interested concerns models of theoretical neuroscience that could explain the speed and robustness of an expert's recollection. The approach is based on Sparse Distributed Memory, which has been shown to be plausible, both neuroscientifically and psychologically, in a number of ways. A crucial characteristic concerns the limits of human recollection, the 'tip-of-the-tongue' memory event, which is found at a non-linearity in the model. We expand the theoretical framework, deriving an optimization formula to solve this non-linearity. Numerical results demonstrate how a higher frequency of rehearsal, through work or study, immediately increases the robustness and speed associated with expert memory.
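    A minimal numerical sketch of Kanerva-style Sparse Distributed Memory may help (parameters and function names are illustrative, not from the paper): an item is written into every hard storage location whose fixed random address lies within a Hamming radius of the write address, and reading pools counters over the access circle of the cue — which is what makes recall from a noisy or partial cue both fast and robust.

```python
import numpy as np

rng = np.random.default_rng(42)
N, M, R = 256, 2000, 120     # word length, hard locations, access radius

hard_addr = rng.integers(0, 2, size=(M, N))   # fixed random addresses
counters = np.zeros((M, N), dtype=int)        # one counter vector per location

def access_circle(addr):
    """Hard locations whose address is within Hamming distance R of addr."""
    return np.count_nonzero(hard_addr != addr, axis=1) <= R

def sdm_write(addr, data):
    # Distribute the pattern over every location in the access circle.
    counters[access_circle(addr)] += np.where(data == 1, 1, -1)

def sdm_read(addr):
    # Pool the counters over the access circle and threshold at zero.
    return (counters[access_circle(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
sdm_write(pattern, pattern)       # autoassociative storage
noisy = pattern.copy()
noisy[:20] ^= 1                   # corrupt 20 of 256 bits in the cue
recovered = sdm_read(noisy)
```

    Reading is a single pooled vote over a few hundred locations, independent of how many items are stored — a candidate account of the speed of expert recall — and recall degrades sharply only once the cue drifts past a critical distance, the non-linearity the paper analyzes.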

  1. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan

    2016-01-01

    In-memory caching of intermediate data and eager combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in distributed data processing systems like Spark and Flink. However, it has also been widely reported that these techniques would ... create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which ... the garbage collection time by up to 99.9%, 2) to achieve up to 22.7x speedup in terms of execution time in cases without data spilling and 41.6x speedup in cases with data spilling, and 3) to consume up to 46.6% less memory.

  2. Distributed-Memory Breadth-First Search on Massive Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Beamer, Scott [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences; Madduri, Kamesh [Pennsylvania State Univ., University Park, PA (United States). Computer Science & Engineering Dept.; Asanovic, Krste [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences; Patterson, David [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    2017-09-26

    This chapter studies the problem of traversing large graphs using the breadth-first search order on distributed-memory supercomputers. We consider both the traditional level-synchronous top-down algorithm as well as the recently discovered direction optimizing algorithm. We analyze the performance and scalability trade-offs in using different local data structures such as CSR and DCSC, enabling in-node multithreading, and graph decompositions such as 1D and 2D decomposition.
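    The level-synchronous top-down algorithm can be summarized in serial form (a sketch, not the chapter's distributed implementation): the whole current frontier is expanded, then a synchronization ends the level. On a distributed-memory machine that synchronization is an all-to-all exchange of newly discovered vertices, and the direction-optimizing variant instead sweeps bottom-up once the frontier grows large.

```python
def bfs_levels(adj, source):
    """Level-synchronous top-down BFS: assign each vertex the depth at
    which it is first discovered, one frontier (level) at a time."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in level:        # first visit assigns the level
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
levels = bfs_levels(adj, 0)
```

    In a 1D decomposition each process owns a slice of `adj` and of the frontier; in a 2D decomposition the edge exchange is split across processor rows and columns, which is what the chapter's trade-off analysis compares.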

  3. A portable implementation of ARPACK for distributed memory parallel architectures

    Energy Technology Data Exchange (ETDEWEB)

    Maschhoff, K.J.; Sorensen, D.C.

    1996-12-31

    ARPACK is a package of Fortran 77 subroutines which implement the Implicitly Restarted Arnoldi Method used for solving large sparse eigenvalue problems. A parallel implementation of ARPACK is presented which is portable across a wide range of distributed memory platforms and requires minimal changes to the serial code. The communication layers used for message passing are the Basic Linear Algebra Communication Subprograms (BLACS) developed for the ScaLAPACK project and the Message Passing Interface (MPI).
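    The kernel that ARPACK restarts implicitly is the Arnoldi iteration. A plain, unrestarted NumPy sketch of that kernel (not ARPACK's implementation, which adds implicit restarting, deflation, and shift selection):

```python
import numpy as np

def arnoldi(A, v0, m):
    """Plain Arnoldi iteration: build an orthonormal Krylov basis V and a
    small Hessenberg matrix H with A V ≈ V H; the eigenvalues of H (Ritz
    values) approximate extremal eigenvalues of A."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

rng = np.random.default_rng(0)
A = np.diag([10.0, 5.0, 2.0, 1.0]) + 0.01 * rng.standard_normal((4, 4))
V, H = arnoldi(A, np.ones(4), m=4)
ritz = np.sort(np.linalg.eigvals(H).real)[::-1]
```

    In the parallel setting the only communicating operations are the matrix-vector product and the inner products in the Gram-Schmidt loop, which is why the distributed port needs so few changes to the serial code.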

  4. Understanding Notional Machines through Traditional Teaching with Conceptual Contraposition and Program Memory Tracing

    Directory of Open Access Journals (Sweden)

    Jeisson Hidalgo-Céspedes

    2016-08-01

    Full Text Available A correct understanding of how computers run code is mandatory in order to effectively learn to program. Lectures have historically been used in programming courses to teach how computers execute code, and students are assessed through traditional evaluation methods, such as exams. Constructivist learning theory objects to students' passiveness during lessons and to traditional quantitative methods for evaluating a complex cognitive process such as understanding. Constructivism proposes complementary techniques, such as conceptual contraposition and colloquies. We enriched the lectures of a “Programming II” (CS2) course by combining conceptual contraposition with program memory tracing, and then evaluated students' understanding of programming concepts through colloquies. Results revealed that these techniques applied to the lecture are insufficient to help students develop satisfactory mental models of the C++ notional machine, and that colloquies proved to be the most comprehensive of the traditional evaluations conducted in the course.

  5. MACHINE LEARNING FOR THE SELF-ORGANIZATION OF DISTRIBUTED SYSTEMS IN ECONOMIC APPLICATIONS

    OpenAIRE

    Jerzy Balicki; Waldemar Korłub

    2017-01-01

    In this paper, an application of machine learning to the problem of self-organization of distributed systems has been discussed with regard to economic applications, with particular emphasis on supervised neural network learning to predict stock investments and some ratings of companies. In addition, genetic programming can play an important role in the preparation and testing of several financial information systems. For this reason, machine learning applications have been discussed because ...

  6. [Distribution of neural memory, loading factor, its regulation and optimization].

    Science.gov (United States)

    Radchenko, A N

    1999-01-01

    Recording and retrieving functions of the neural memory are simulated as a control of local conformational processes in neural synaptic fields. The localization of conformational changes is related to the afferent temporal-spatial pulse pattern flow, the microstructure of connections, and a plurality of temporal delays in synaptic fields and afferent pathways. The loci of conformations are described by sets of afferent addresses named address domains. Being superimposed on each other, address domains form a multilayer covering of the address space of the neuron or the ensemble. The superposition factor determines the dissemination of the conformational process, the fuzziness of memory, and its accuracy and reliability. The engram is formed as defects in the packing of the address space and hence can be retrieved in inverse form. The accuracy of the retrieved information depends on the threshold level of conformational transitions, the distribution of conformational changes in the synaptic fields of the neuronal population, and the memory loading factor. The latter is represented in the model by a slow potential. It reflects the total conformational changes and displaces the membrane potential to monostable conformational regimes, governing the exit from the recording regime, the potentiation of the neuron, and the readiness for reproduction. A relative amplitude of the slow potential and the coefficient of post-conformational modification of ionic conductivity, which provide maximum reliability, accuracy, and capacity of memory, are calculated.

  7. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Directory of Open Access Journals (Sweden)

    Cuihong Wen

    Full Text Available Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named the Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM into the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).
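    The "DAG" part of such schemes reduces multi-class classification to a walk over one-vs-one binary classifiers. A hedged sketch of that evaluation (the class names, `vote` stand-in, and nearest-center rule are illustrative, not trained LDMs from the paper):

```python
def dag_classify(classes, pairwise_vote, x):
    """Decision-DAG evaluation over one-vs-one binary classifiers: keep a
    candidate list and let each binary test eliminate one class, so only
    k-1 of the k(k-1)/2 trained classifiers run per sample."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        loser = b if pairwise_vote(a, b, x) == a else a
        remaining.remove(loser)
    return remaining[0]

# Hypothetical stand-in for trained LDM binary classifiers: decide a
# scalar feature by which class center is nearer.
centers = {"note": 0.0, "rest": 1.0, "clef": 2.0}
def vote(a, b, x):
    return a if abs(x - centers[a]) <= abs(x - centers[b]) else b

label = dag_classify(["note", "rest", "clef"], vote, 1.9)
```

    Each pairwise test here would be one trained LDM; the DAG structure only organizes which of them get consulted.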

  8. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Science.gov (United States)

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).

  9. Parallel SN algorithms in shared- and distributed-memory environments

    International Nuclear Information System (INIS)

    Haghighat, Alireza; Hunter, Melissa A.; Mattis, Ronald E.

    1995-01-01

    Different 2-D spatial domain partitioning Sn transport theory algorithms have been developed on the basis of the Block-Jacobi iterative scheme. These algorithms have been incorporated into TWOTRAN-II, and tested on a shared-memory CRAY Y-MP C90 and a distributed-memory IBM SP1. For a series of fixed source r-z geometry homogeneous problems, parallel efficiencies in a range of 50-90% are achieved on the C90 with 6 processors, and lower values (20-60%) are obtained on the SP1. It is demonstrated that better performance is attainable if one addresses issues such as convergence rate, load-balancing, and granularity for both architectures, as well as message passing (network bandwidth and latency) for SP1. (author). 17 refs, 4 figs
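    The Block-Jacobi scheme these algorithms build on can be sketched serially: the domain is split into blocks, each block (one processor in the parallel setting) solves its own subsystem with the coupling to other blocks lagged one sweep, and sweeps would exchange boundary data in the parallel version. The 1-D diffusion-like test matrix below is an illustrative stand-in, not the TWOTRAN-II transport operator.

```python
import numpy as np

def block_jacobi(A, b, nblocks, sweeps=1500):
    """Stationary Block-Jacobi iteration: A_kk x_k_new = b_k - sum_{j!=k} A_kj x_j_old."""
    n = len(b)
    x = np.zeros(n)
    bounds = np.linspace(0, n, nblocks + 1, dtype=int)
    for _ in range(sweeps):
        x_old = x.copy()
        for k in range(nblocks):            # each block = one "processor"
            s = slice(bounds[k], bounds[k + 1])
            # subtract all coupling, then add back the diagonal block's part
            rhs = b[s] - A[s] @ x_old + A[s, s] @ x_old[s]
            x[s] = np.linalg.solve(A[s, s], rhs)   # exact local solve
    return x

# 1-D Laplacian-type system as a toy spatial-domain problem
n = 32
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.ones(n)
x = block_jacobi(A, b, nblocks=4)
print(np.max(np.abs(A @ x - b)))  # residual shrinks toward machine precision
```

Because each block's solve uses only the previous sweep's values from other blocks, the inner loop over `k` is what a distributed implementation executes concurrently, at the cost of a slower convergence rate than Gauss-Seidel-type sweeps.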

  10. SU-E-T-113: Dose Distribution Using Respiratory Signals and Machine Parameters During Treatment

    International Nuclear Information System (INIS)

    Imae, T; Haga, A; Saotome, N; Kida, S; Nakano, M; Takeuchi, Y; Shiraki, T; Yano, K; Yamashita, H; Nakagawa, K; Ohtomo, K

    2014-01-01

    Purpose: Volumetric modulated arc therapy (VMAT) is a rotational intensity-modulated radiotherapy (IMRT) technique capable of acquiring projection images during treatment. Treatment plans for lung tumors using stereotactic body radiotherapy (SBRT) are calculated with planning computed tomography (CT) images of only the exhale phase. The purpose of this study is to evaluate dose distribution reconstructed solely from data acquired during treatment, such as respiratory signals and machine parameters. Methods: A phantom and three patients with lung tumors underwent CT scans for treatment planning. They were treated by VMAT while projection images were acquired to derive their respiratory signals and machine parameters, including positions of multi-leaf collimators, dose rates, and integrated monitor units. The respiratory signals were divided into 4 and 10 phases, and the machine parameters were correlated with the divided respiratory signals based on the gantry angle. Dose distributions for each respiratory phase were calculated from plans reconstructed from the respiratory signals and machine parameters acquired during treatment. The doses at the isocenter, the maximum-dose point, and the centroid of the target were evaluated. Results and Discussion: Dose distributions during treatment were calculated using the machine parameters and the respiratory signals detected from projection images. The maximum dose difference between the planned and in-treatment distributions was −1.8±0.4% at the centroid of the target, and the dose differences at the evaluated points between the 4- and 10-phase reconstructions were not significant. Conclusion: The present method successfully evaluated dose distribution using respiratory signals and machine parameters acquired during treatment. This method is feasible for verifying the actual dose delivered to a moving target.

  11. Wearable Technology in Medicine: Machine-to-Machine (M2M) Communication in Distributed Systems.

    Science.gov (United States)

    Schmucker, Michael; Yildirim, Kemal; Igel, Christoph; Haag, Martin

    2016-01-01

    Smart wearables are capable of supporting physicians during various processes in medical emergencies. Nevertheless, it is almost impossible to operate several computers without neglecting a patient's treatment. Thus, it is necessary to set up a distributed network consisting of two or more computers to exchange data or initiate remote procedure calls (RPC). Without flawless connections between those devices, it is impossible to transfer medically relevant data to the most suitable device or to control one device from another. This paper shows how wearables can be paired and what problems occur when trying to pair several wearables. Furthermore, we describe which scenarios become possible in the context of emergency medicine/paramedicine.

  12. Parallel Breadth-First Search on Distributed Memory Systems

    Energy Technology Data Exchange (ETDEWEB)

    Computational Research Division; Buluc, Aydin; Madduri, Kamesh

    2011-04-15

    Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly-tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex-based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
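    The level-synchronous strategy can be sketched serially: the frontier expands one level per superstep, which is exactly the granularity at which a vertex-partitioned distributed implementation would synchronize (a frontier exchange plus barrier replaces the simple list swap below). The tiny adjacency list is invented for illustration.

```python
def bfs_levels(adj, source):
    """Return {vertex: level}; each while-iteration is one superstep."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:           # in MPI terms: each rank scans its owned frontier vertices
            for v in adj[u]:
                if v not in level:   # the visited check stands in for a remote-ownership lookup
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier     # barrier + frontier exchange in the parallel version
    return level

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```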

  13. Periodic bidirectional associative memory neural networks with distributed delays

    Science.gov (United States)

    Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde

    2006-05-01

    Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method, and Young's inequality. These results are helpful for designing a globally exponentially stable, periodically oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.

  14. MACHINE LEARNING FOR THE SELF-ORGANIZATION OF DISTRIBUTED SYSTEMS IN ECONOMIC APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2017-03-01

    Full Text Available In this paper, an application of machine learning to the problem of self-organization of distributed systems is discussed with regard to economic applications, with particular emphasis on supervised neural network learning to predict stock investments and ratings of companies. In addition, genetic programming can play an important role in the preparation and testing of several financial information systems, and machine learning applications are discussed in this context because some software applications can be constructed automatically by genetic programming. To obtain a competitive advantage, machine learning can be used for the management of self-organizing cloud computing systems performing calculations for business. The use of selected economic self-organizing distributed systems is also described, including some testing methods for predicting borrower reliability. Finally, conclusions and directions for further research are proposed.

  15. 76 FR 32231 - International Business Machines (IBM), Sales and Distribution Business Unit, Global Sales...

    Science.gov (United States)

    2011-06-03

    ... for the workers and former workers of International Business Machines (IBM), Sales and Distribution... reconsideration alleges that IBM outsourced to India and China. During the reconsideration investigation, it was..., Armonk, New York. The subject worker group supply computer software development and maintenance services...

  16. 76 FR 21033 - International Business Machines (IBM), Sales and Distribution Business Unit, Global Sales...

    Science.gov (United States)

    2011-04-14

    ... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,364] International Business Machines (IBM), Sales and Distribution Business Unit, Global Sales Solution Department, Off-Site Teleworker in Centerport, New York; Notice of Affirmative Determination Regarding Application for Reconsideration By application dated November 29, 2011,...

  17. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di; Diao, Ruisheng; Huang, Zhenyu

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by a protective branch-switching operation. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve with a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation, using Open Multi-Processing (OpenMP) on a shared-memory platform and the Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performance for running parallel dynamic simulation is compared and demonstrated.

  18. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

    OpenAIRE

    Abadi, Martín; Agarwal, Ashish; Barham, Paul; Brevdo, Eugene; Chen, Zhifeng; Citro, Craig; Corrado, Greg S.; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Goodfellow, Ian; Harp, Andrew; Irving, Geoffrey; Isard, Michael

    2016-01-01

    TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algo...

  19. Machine Learning Analysis Identifies Drosophila Grunge/Atrophin as an Important Learning and Memory Gene Required for Memory Retention and Social Learning.

    Science.gov (United States)

    Kacsoh, Balint Z; Greene, Casey S; Bosco, Giovanni

    2017-11-06

    High-throughput experiments are becoming increasingly common, and scientists must balance hypothesis-driven experiments with genome-wide data acquisition. We sought to predict novel genes involved in Drosophila learning and long-term memory from existing public high-throughput data. We performed an analysis using PILGRM, which analyzes public gene expression compendia using machine learning. We evaluated the top prediction alongside genes involved in learning and memory in IMP, an interface for functional relationship networks. We identified Grunge/Atrophin ( Gug/Atro ), a transcriptional repressor, histone deacetylase, as our top candidate. We find, through multiple, distinct assays, that Gug has an active role as a modulator of memory retention in the fly and its function is required in the adult mushroom body. Depletion of Gug specifically in neurons of the adult mushroom body, after cell division and neuronal development is complete, suggests that Gug function is important for memory retention through regulation of neuronal activity, and not by altering neurodevelopment. Our study provides a previously uncharacterized role for Gug as a possible regulator of neuronal plasticity at the interface of memory retention and memory extinction. Copyright © 2017 Kacsoh et al.

  20. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE), which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of an exponential kernel enables the transformation of the non-Markovian model into a Markovian model in an extended state space. For the study of First Passage Time (FPT) with an exponential delay kernel, the model has been transformed into a system of coupled Stochastic Differential Equations (SDEs) in a two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on the Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior, depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay, with a memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with a weak delay kernel, the decay of damped oscillations is slower for the model with a strong delay kernel.
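    The Markovian embedding of an exponential kernel can be illustrated with a hypothetical Euler-Maruyama simulation of a two-dimensional coupled SDE pair, where an auxiliary variable low-pass filters the membrane potential; the drift terms and parameter values below are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def euler_maruyama(T=1.0, dt=1e-3, tau=0.02, eta=0.1, sigma=0.5, seed=0):
    """Simulate (v, u): v is the membrane potential, u carries the
    exponentially weighted memory of v (the extended-state-space variable)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.zeros(n)
    u = np.zeros(n)
    for k in range(n - 1):
        dw = rng.normal(0.0, np.sqrt(dt))               # Wiener increment
        v[k + 1] = v[k] + (-v[k] / tau + u[k]) * dt + sigma * dw
        u[k + 1] = u[k] + (-(u[k] - v[k]) / eta) * dt   # exponential filter of v
    return v, u

v, u = euler_maruyama()
print(v.mean(), v.std())
```

Replacing the exponential filter by a chain of such auxiliary variables would give the Gamma-kernel (strong delay) case, at the cost of a higher-dimensional Markovian state.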

  1. Balance in machine architecture: Bandwidth on board and offboard, integer/control speed and flops versus memory

    International Nuclear Information System (INIS)

    Fischler, M.

    1992-04-01

    The issues to be addressed here are those of "balance" in machine architecture. By this, we mean how much emphasis must be placed on various aspects of the system to maximize its usefulness for physics. There are three components that contribute to the utility of a system: how the machine can be used, how big a problem can be attacked, and what the effective capabilities (power) of the hardware are. The effective-power issue is a matter of evaluating the impact of design decisions trading off architectural features such as memory bandwidth and interprocessor communication capabilities. What is studied is the effect these machine parameters have on how quickly the system can solve desired problems. There is a reasonable method for studying this: one selects a few representative algorithms and computes the impact of changing memory bandwidths, and so forth. The only room for controversy here is in the selection of representative problems. The issue of how big a problem can be attacked boils down to a balance of memory size versus power. Although this is a balance issue, it is very different from the effective-power situation, because no firm answer can be given at this time. The power-to-memory ratio is highly problem dependent, and optimizing it requires several pieces of physics input, including: how big a lattice is needed for interesting results; what sort of algorithms are best to use; and how many sweeps are needed to get valid results. We seem to be at the threshold of learning things about these issues, but for now, the memory size issue will necessarily be addressed in terms of best guesses, rules of thumb, and researchers' opinions.

  2. Massively Parallel Polar Decomposition on Distributed-Memory Systems

    KAUST Repository

    Ltaief, Hatem

    2018-01-01

    We present a high-performance implementation of the Polar Decomposition (PD) on distributed-memory systems. Building upon the QR-based Dynamically Weighted Halley (QDWH) algorithm, the key idea lies in finding the best rational approximation for the scalar sign function, which also corresponds to the polar factor for symmetric matrices, to further accelerate the QDWH convergence. Based on the Zolotarev rational functions, introduced by Zolotarev (ZOLO) in 1877, this new PD algorithm ZOLO-PD converges within two iterations even for ill-conditioned matrices, instead of the original six iterations needed for QDWH. ZOLO-PD uses the property of Zolotarev functions that optimality is maintained when two functions are composed in an appropriate manner. The resulting ZOLO-PD has a convergence rate up to seventeen, in contrast to the cubic convergence rate of QDWH. This comes at the price of higher arithmetic costs and a larger memory footprint. These extra floating-point operations can, however, be processed in an embarrassingly parallel fashion. We demonstrate performance using up to 102,400 cores on two supercomputers and show that, in the presence of a large number of processing units, ZOLO-PD is able to outperform QDWH by up to 2.3X speedup, especially in situations where QDWH runs out of work, for instance, in the strong scaling mode of operation.
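    For context, the simplest iterative route to the polar factor is the textbook Newton iteration X <- (X + X^{-T})/2, sketched serially below; QDWH and ZOLO-PD replace it with inverse-free, better-conditioned rational iterations, so this is only a baseline illustration, not the paper's method.

```python
import numpy as np

def polar_newton(A, iters=30):
    """Newton iteration for the polar decomposition A = U H of a
    nonsingular real matrix: X converges to the orthogonal factor U."""
    X = np.array(A, dtype=float)
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X).T)
    U = X
    H = U.T @ A                   # symmetric positive-definite factor
    return U, 0.5 * (H + H.T)     # symmetrize against round-off

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
U, H = polar_newton(A)
print(np.allclose(U.T @ U, np.eye(5)), np.allclose(U @ H, A))  # True True
```

Each step needs an explicit inverse, which is exactly what makes this baseline hard to scale; the QR-based rational iterations trade those inverses for QR factorizations that map well onto distributed-memory machines.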

  3. Sensorimotor memory of object weight distribution during multidigit grasp.

    Science.gov (United States)

    Albert, Frederic; Santello, Marco; Gordon, Andrew M

    2009-10-09

    We studied the ability to transfer three-digit force sharing patterns learned through consecutive lifts of an object with an asymmetric center of mass (CM). After several object lifts, we asked subjects to rotate and translate the object to the contralateral hand and perform one additional lift. This task was performed under two weight conditions (550 and 950 g) to determine the extent to which subjects would be able to transfer weight and CM information. Learning transfer was quantified by measuring the extent to which force sharing patterns and peak object roll on the first post-translation trial resembled those measured on the pre-translation trial with the same CM. We found that the overall gain of fingertip forces was transferred following object rotation, but that the scaling of individual digit forces was specific to the learned digit-object configuration, and thus was not transferred following rotation. As a result, on the first post-translation trial there was a significantly larger object roll following object lift-off than on the pre-translation trial. This suggests that sensorimotor memories for weight, requiring scaling of fingertip force gain, may differ from memories for mass distribution.

  4. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.

    2016-08-08

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. We investigate the latter scenario, which performs more floating-point operations but exposes at the same time more parallelism, and therefore, runs closer to the theoretical peak performance of the system, thanks to more compute-bound matrix operations. Profiling results show the performance scalability of QDWH for calculating the polar decomposition using around 9200 MPI processes on well and ill-conditioned matrices of 100K×100K problem size. We study then the performance impact of the QDWH-based polar decomposition as a pre-processing step toward calculating the SVD itself. The new distributed-memory implementation of the QDWH-SVD solver achieves up to five-fold speedup against current state-of-the-art vendor SVD implementations. © Springer International Publishing Switzerland 2016.

  5. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in communities ranging from social to business applications, and machine learning is an important method for this problem. When we use machine learning for prediction, we use the data in the training set to fit the model and estimate the distribution of data in the test set. But when we use machine learning for continuous prediction, we acquire new data as time goes by and use them to predict future data, which creates a problem: as the size of the data set increases over time, the distribution changes, and much garbage data accumulates in the training set. The garbage data should be removed because it reduces the accuracy of the prediction. The main contribution of this article is using new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model to describe how data flow among the test set, training set, validation set, and garbage set, and thereby improve the accuracy of prediction. As the data set changes, the best machine learning model also changes. We design a hybrid voting algorithm to fit the data set better: it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and achieves a better result than the traditional method that adds all data to the data set, and our hybrid voting algorithm yields a better prediction than the average accuracy of the other models.
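    The hybrid-voting idea can be sketched with trivial stand-in predictors (the paper uses seven machine learning models; the three rules and the tiny validation set below are invented for illustration): validation accuracy supplies each model's voting weight, so more accurate models dominate the vote.

```python
def accuracy(model, data):
    """Fraction of validation examples the model gets right."""
    return sum(model(x) == y for x, y in data) / len(data)

def weighted_vote(models, weights, x):
    """Accumulate each model's weight behind its prediction; return the winner."""
    tally = {}
    for m, w in zip(models, weights):
        tally[m(x)] = tally.get(m(x), 0.0) + w
    return max(tally, key=tally.get)

models = [lambda x: x > 0,      # a decent rule
          lambda x: x > 2,      # too conservative
          lambda x: True]       # always-positive baseline
val = [(-1, False), (-2, False), (1, True), (3, True)]   # validation set
weights = [accuracy(m, val) for m in models]             # [1.0, 0.75, 0.5]
print(weighted_vote(models, weights, 1))                 # True
```

Re-deriving the weights whenever the validation set is refreshed is what lets the ensemble track a drifting data distribution.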

  6. Memory intensive functional architecture for distributed computer control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures, and one system implementation: a system performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  7. Particle simulation on a distributed memory highly parallel processor

    International Nuclear Information System (INIS)

    Sato, Hiroyuki; Ikesaka, Morio

    1990-01-01

    This paper describes parallel molecular dynamics simulation of atoms governed by local force interaction. The space in the model is divided into cubic subspaces and mapped to the processor array of the CAP-256, a distributed memory, highly parallel processor developed at Fujitsu Labs. We developed a new technique to avoid redundant calculation of forces between atoms in different processors. Experiments showed the communication overhead was less than 5%, and the idle time due to load imbalance was less than 11% for two model problems which contain 11,532 and 46,128 argon atoms. From the software simulation, the CAP-II which is under development is estimated to be about 45 times faster than CAP-256 and will be able to run the same problem about 40 times faster than Fujitsu's M-380 mainframe when 256 processors are used. (author)

  8. The distribution and the functions of autobiographical memories: Why do older adults remember autobiographical memories from their youth?

    Science.gov (United States)

    Wolf, Tabea; Zimprich, Daniel

    2016-09-01

    In the present study, the distribution of autobiographical memories was examined from a functional perspective: we examined whether the extent to which long-term autobiographical memories were rated as having a self-, a directive, or a social function affects the location (mean age) and scale (standard deviation) of the memory distribution. Analyses were based on a total of 5598 autobiographical memories generated by 149 adults aged between 50 and 81 years in response to 51 cue-words. Participants provided their age at the time when the recalled events had happened and rated how frequently they recall these events for self-, directive, and social purposes. While more frequently using autobiographical memories for self-functions was associated with an earlier mean age, memories frequently shared with others showed a narrower distribution around a later mean age. The directive function, by contrast, did not affect the memory distribution. The results strengthen the assumption that experiences from an individual's late adolescence serve to maintain a sense of self-continuity throughout the lifespan. Experiences that are frequently shared with others, in contrast, stem from a narrow age range located in young adulthood.

  9. Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products

    Science.gov (United States)

    Golestanian, Ramin; Liverpool, Tanniemola B.; Ajdari, Armand

    2005-06-01

    A simple model for the reaction-driven propulsion of a small device is proposed as a model for (part of) a molecular machine in aqueous media. The motion of the device is driven by an asymmetric distribution of reaction products. The propulsive velocity of the device is calculated as well as the scale of the velocity fluctuations. The effects of hydrodynamic flow as well as a number of different scenarios for the kinetics of the reaction are addressed.

  10. A nanojet: propulsion of a molecular machine by an asymmetric distribution of reaction--products

    Science.gov (United States)

    Liverpool, Tanniemola; Golestanian, Ramin; Ajdari, Armand

    2006-03-01

    A simple model for the reaction-driven propulsion of a small device is proposed as a model for (part of) a molecular machine in aqueous media. Motion of the device is driven by an asymmetric distribution of reaction products. We calculate the propulsive velocity of the device as well as the scale of the velocity fluctuations. We also consider the effects of hydrodynamic flow as well as a number of different scenarios for the kinetics of the reaction.

  11. Improvement of the thickness distribution of a quartz crystal wafer by numerically controlled plasma chemical vaporization machining

    International Nuclear Information System (INIS)

    Shibahara, Masafumi; Yamamura, Kazuya; Sano, Yasuhisa; Sugiyama, Tsuyoshi; Endo, Katsuyoshi; Mori, Yuzo

    2005-01-01

    To improve the thickness uniformity of thin quartz crystal wafers, a new machining process that utilizes an atmospheric-pressure plasma was developed. In an atmospheric-pressure plasma process, since the kinetic energy of the ions that impinge on the wafer surface is small and the density of reactive species is large, high-efficiency machining without damage is realized, and the thickness distribution is corrected by numerically controlled scanning of the quartz wafer relative to the localized high-density plasma. Using the developed machining process, the thickness distribution of an AT-cut wafer was improved from 174 nm [peak to valley (p-v)] to 67 nm (p-v) within 94 s. Since there are no unwanted spurious modes in the machined quartz wafer, the developed machining method was proven to have high machining efficiency without causing any damage.

  12. Power profiling of Cholesky and QR factorizations on distributed memory systems

    KAUST Repository

    Bosilca, George; Ltaief, Hatem; Dongarra, Jack

    2012-01-01

    with a dynamic distributed scheduler (DAGuE) to leverage distributed memory systems. We present performance results (Gflop/s) as well as the power profile (Watts) of two common dense factorizations needed to solve linear systems of equations, namely

  13. Save Now [Y/N]? Machine Memory at War in Iain Banks' "Look to Windward"

    Science.gov (United States)

    Blackmore, Tim

    2010-01-01

    Creating memory during and after wartime trauma is vexed by state attempts to control public and private discourse. Science fiction author Iain Banks' novel "Look to Windward" proposes different ways of preserving memory and culture, from posthuman memory devices, to artwork, to architecture, to personal, local ways of remembering.…

  14. Differentiation and Response Bias in Episodic Memory: Evidence from Reaction Time Distributions

    Science.gov (United States)

    Criss, Amy H.

    2010-01-01

    In differentiation models, the processes of encoding and retrieval produce an increase in the distribution of memory strength for targets and a decrease in the distribution of memory strength for foils as the amount of encoding increases. This produces an increase in the hit rate and decrease in the false-alarm rate for a strongly encoded compared…

  15. Dynamical Mass Measurements of Contaminated Galaxy Clusters Using Support Distribution Machines

    Science.gov (United States)

    Ntampaka, Michelle; Trac, Hy; Sutherland, Dougal; Fromenteau, Sebastien; Poczos, Barnabas; Schneider, Jeff

    2018-01-01

    We study dynamical mass measurements of galaxy clusters contaminated by interlopers and show that a modern machine learning (ML) algorithm can predict masses to better than a factor of two compared to a standard scaling relation approach. We create two mock catalogs from Multidark's publicly available N-body MDPL1 simulation, one with perfect galaxy cluster membership information and the other where a simple cylindrical cut around the cluster center allows interlopers to contaminate the clusters. In the standard approach, we use a power-law scaling relation to infer cluster mass from galaxy line-of-sight (LOS) velocity dispersion. Assuming perfect membership knowledge, this unrealistic case produces a wide fractional mass error distribution, with a width E=0.87. Interlopers introduce additional scatter, significantly widening the error distribution further (E=2.13). We employ the support distribution machine (SDM) class of algorithms to learn from distributions of data to predict single values. Applied to distributions of galaxy observables such as LOS velocity and projected distance from the cluster center, SDM yields better than a factor-of-two improvement (E=0.67) for the contaminated case. Remarkably, SDM applied to contaminated clusters is better able to recover masses than even the scaling relation approach applied to uncontaminated clusters. We show that the SDM method more accurately reproduces the cluster mass function, making it a valuable tool for employing cluster observations to evaluate cosmological models.

  16. Experimental study on Response Parameters of Ni-rich NiTi Shape Memory Alloy during Wire Electric Discharge Machining

    Science.gov (United States)

    Bisaria, Himanshu; Shandilya, Pragya

    2018-03-01

    Nowadays NiTi SMAs are gaining prominence due to their unique properties, such as superelasticity, the shape memory effect, high fatigue strength, and many other enriched physical and mechanical properties. The current study explores the effect of machining parameters, namely peak current (Ip), pulse-off time (TOFF), and pulse-on time (TON), on the wire wear ratio (WWR) and dimensional deviation (DD) in WEDM. It was found that high WWR and DD were mainly ascribed to high discharge energy. The WWR and DD increased with increasing pulse-on time and peak current, whereas a high pulse-off time was favourable for low WWR and DD.

  17. Production of a double-humped ion velocity distribution function in a single-ended Q-machine

    DEFF Research Database (Denmark)

    Andersen, S.A.; Jensen, Vagn Orla; Michelsen, Poul

    1970-01-01

    An experimental method of producing a double-humped velocity distribution function for the ions in a Q-machine is described. The method is based on charge exchange processes between neutral caesium and the ions in a caesium plasma.

  18. Distributed state machine supervision for long-baseline gravitational-wave detectors

    International Nuclear Information System (INIS)

    Rollins, Jameson Graef

    2016-01-01

    The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two identical yet independent, widely separated, long-baseline gravitational-wave detectors. Each Advanced LIGO detector consists of complex optical-mechanical systems isolated from the ground by multiple layers of active seismic isolation, all controlled by hundreds of fast, digital, feedback control systems. This article describes a novel state machine-based automation platform developed to handle the automation and supervisory control challenges of these detectors. The platform, called Guardian, consists of distributed, independent, state machine automaton nodes organized hierarchically for full detector control. User code is written in standard Python and the platform is designed to facilitate the fast-paced development process associated with commissioning the complicated Advanced LIGO instruments. While developed specifically for the Advanced LIGO detectors, Guardian is a generic state machine automation platform that is useful for experimental control at all levels, from simple table-top setups to large-scale multi-million dollar facilities.

  19. Distributed state machine supervision for long-baseline gravitational-wave detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rollins, Jameson Graef, E-mail: jameson.rollins@ligo.org [LIGO Laboratory, California Institute of Technology, Pasadena, California 91125 (United States)

    2016-09-15

    The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two identical yet independent, widely separated, long-baseline gravitational-wave detectors. Each Advanced LIGO detector consists of complex optical-mechanical systems isolated from the ground by multiple layers of active seismic isolation, all controlled by hundreds of fast, digital, feedback control systems. This article describes a novel state machine-based automation platform developed to handle the automation and supervisory control challenges of these detectors. The platform, called Guardian, consists of distributed, independent, state machine automaton nodes organized hierarchically for full detector control. User code is written in standard Python and the platform is designed to facilitate the fast-paced development process associated with commissioning the complicated Advanced LIGO instruments. While developed specifically for the Advanced LIGO detectors, Guardian is a generic state machine automation platform that is useful for experimental control at all levels, from simple table-top setups to large-scale multi-million dollar facilities.
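
    A hierarchical state-machine node of the kind described can be sketched in Python (Guardian's implementation language). The class, method, and state names below are invented for illustration and are not Guardian's actual API.

```python
# A minimal state-machine node: a supervisor requests a target state and the
# node steps through intermediate states toward it, recording the path taken.
class Node:
    def __init__(self, name, graph):
        # graph maps (current_state, target_state) -> next state toward target.
        self.name, self.graph = name, graph
        self.state = "DOWN"

    def request(self, target):
        # Step until the target state is reached; return the path traversed.
        path = [self.state]
        while self.state != target:
            self.state = self.graph[(self.state, target)]
            path.append(self.state)
        return path

# Hypothetical seismic-isolation node: DOWN -> DAMPED -> ISOLATED.
graph = {("DOWN", "ISOLATED"): "DAMPED", ("DAMPED", "ISOLATED"): "ISOLATED"}
node = Node("ISI", graph)
print(node.request("ISOLATED"))   # ['DOWN', 'DAMPED', 'ISOLATED']
```

    In a hierarchical arrangement, a supervisor node would issue such requests to many subordinate nodes, each managing one subsystem.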

  20. Global assessment of soil organic carbon stocks and spatial distribution of histosols: the Machine Learning approach

    Science.gov (United States)

    Hengl, Tomislav

    2016-04-01

    Preliminary results of predicting the distribution of organic soils (Histosols) and soil organic carbon stock (in tonnes per ha) using global compilations of soil profiles (about 150,000 points) and covariates at 250 m spatial resolution (about 150 covariates; mainly MODIS seasonal land products, SRTM DEM derivatives, climatic images, and lithological, land cover and landform maps) are presented. We focus on a data-driven approach, i.e. Machine Learning techniques that often require no prior knowledge about the distribution of the target variable or about the possible relationships. Other advantages of using machine learning are (DOI: 10.1371/journal.pone.0125814): All rules required to produce outputs are formalized. The whole procedure is documented (the statistical model and associated computer script), enabling reproducible research. Predicted surfaces can make use of various information sources and can be optimized relative to all available quantitative point and covariate data. There is more flexibility in terms of the spatial extent, resolution and support of requested maps. Automated mapping is also more cost-effective: once the system is operational, maintenance and production of updates are an order of magnitude faster and cheaper. Consequently, prediction maps can be updated and improved at shorter and shorter time intervals. Some disadvantages of automated soil mapping based on Machine Learning are: Models are data-driven, and any serious blunders or artifacts in the input data can propagate to order-of-magnitude larger errors than in the case of expert-based systems. Fitting machine learning models is an order of magnitude more computationally demanding; the computing effort can be tens of thousands of times higher than if e.g. linear geostatistics is used. Many machine learning models are fairly complex, often abstract, and their interpretation is not trivial, requiring special multidimensional / multivariable plotting and data mining tools.

  1. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger; Gokhale, Maya; Amato, Nancy M.

    2013-01-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash

  2. The DELPHI distributed information system for exchanging LEP machine related information

    International Nuclear Information System (INIS)

    Doenszelmann, M.; Gaspar, C.

    1994-01-01

    An information management system was designed and implemented to interchange information between the DELPHI experiment at CERN and the monitoring/control system for the LEP (Large Electron Positron Collider) accelerator. This system is distributed and communicates with many different sources and destinations (LEP) using different types of communication. The system itself communicates internally via a communication system based on a publish-and-subscribe mechanism, DIM (Distributed Information Manager). The information gathered by this system is used for on-line as well as off-line data analysis. Therefore it logs the information to a database and makes it available to operators and users via DUI (DELPHI User Interface). The latter was extended to be capable of displaying ''time-evolution'' plots. It also handles a protocol, implemented using a finite state machine, SMI (State Management Interface), for (semi-)automatic running of the Data Acquisition System and the Slow Controls System. ((orig.))

  3. Fault Diagnosis for Distribution Networks Using Enhanced Support Vector Machine Classifier with Classical Multidimensional Scaling

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-09-01

    Full Text Available In this paper, a new fault diagnosis technique based on the time domain reflectometry (TDR) method with a pseudo-random binary sequence (PRBS) stimulus and a support vector machine (SVM) classifier has been investigated to recognize the different types of fault in radial distribution feeders. This novel technique considers the amplitude of the reflected signals and the peaks of the cross-correlation (CCR) between the reflected and incident waves to generate a fault current dataset for the SVM. Furthermore, this multi-layer enhanced SVM classifier is combined with the classical multidimensional scaling (CMDS) feature extraction algorithm and kernel parameter optimization to increase training speed and improve overall classification accuracy. The proposed technique has been tested on a radial distribution feeder to identify ten different types of fault considering 12 input features generated using Simulink software and the MATLAB Toolbox. The success rate of the SVM classifier is over 95%, which demonstrates the effectiveness and high accuracy of the proposed method.
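
    The cross-correlation feature described above can be sketched in a few lines: the lag of the CCR peak between the incident and reflected waves estimates the round-trip delay to the fault. The PRBS length, delay, attenuation, and noise level below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PRBS-like incident wave and a reflected wave delayed by the
# round-trip time to the fault, attenuated and corrupted by noise.
n, delay, attenuation = 512, 73, 0.4
incident = rng.choice([-1.0, 1.0], size=n)
reflected = np.zeros(n)
reflected[delay:] = attenuation * incident[:n - delay]
reflected += rng.normal(0, 0.05, n)

# Cross-correlation between reflected and incident waves; its peak lag
# estimates the round-trip delay, one of the features fed to the classifier.
ccr = np.correlate(reflected, incident, mode="full")
est_delay = int(np.argmax(ccr)) - (n - 1)
print(f"estimated round-trip delay: {est_delay} samples")
```

    PRBS stimuli are attractive here because their autocorrelation is sharply peaked, so the CCR peak stands well above the sidelobes even with noise.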

  4. Performance assessment of commercial relays for anti-islanding protection of rotating machine based distributed generation

    Energy Technology Data Exchange (ETDEWEB)

    Katiraei, F. [Quanta Technology, Houston, TX (United States); Abbey, C. [Natural Resources Canada, Ottawa, ON (Canada). CANMET Energy Technology Centre; Da Cunha, I. [LeapFrog Energy Technologies Inc., Mississauga, ON (Canada); Brisette, Y. [Hydro-Quebec, Montreal, PQ (Canada). Research Inst

    2008-07-01

    According to power industry standards, distributed generation must stop energizing the power grid upon loss of the main system. Either passive or active methods may be used to fulfill this requirement. Passive methods rely on locally measured signals to determine whether the main grid is present, while active methods inject a perturbation into the system that will manifest itself in locally measured signals if the main grid is not present. This paper compared simulation and experimental results for various commercially available relays for passive anti-islanding protection of small (below 500 kW) distributed generators using either synchronous or induction generators. A commercial multifunction relay and an application specific relay for rate-of-change-of-frequency and vector shift were modelled in simulation. Simulation results were compared with tests using a 25 kV induction generator. Results obtained for the induction machine based DG were in good agreement with trip times associated with under/overvoltage relays. The poor results with frequency based relays may be attributed to the method used for calculating frequency. Sensitivity analysis on the degree of capacitor compensation revealed a small non-detection zone, suggesting that this risk should be evaluated for induction machine based interconnections. These results showed that accurate relay modeling is challenging, particularly for frequency based techniques. Other methods for relay testing, such as hardware-in-the-loop, may be more appropriate than simulation, and are more practical, in terms of cost effectiveness, than extensive field trials. 7 refs., 1 tab., 6 figs.

  5. Magnetic Flux Distribution of Linear Machines with Novel Three-Dimensional Hybrid Magnet Arrays

    Directory of Open Access Journals (Sweden)

    Nan Yao

    2017-11-01

    Full Text Available The objective of this paper is to propose a novel tubular linear machine with hybrid permanent magnet arrays and multiple movers, which could be employed for either actuation or sensing technology. The hybrid magnet array produces flux distribution on both sides of windings, and thus helps to increase the signal strength in the windings. The multiple movers are important for airspace technology, because they can improve the system’s redundancy and reliability. The proposed design concept is presented, and the governing equations are obtained based on source free property and Maxwell equations. The magnetic field distribution in the linear machine is thus analytically formulated by using Bessel functions and harmonic expansion of magnetization vector. Numerical simulation is then conducted to validate the analytical solutions of the magnetic flux field. It is proved that the analytical model agrees with the numerical results well. Therefore, it can be utilized for the formulation of signal or force output subsequently, depending on its particular implementation.

  6. Effect of processing conditions on residual stress distributions by bead-on-plate welding after surface machining

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Mochizuki, Masahito

    2014-01-01

    Residual stress is an important factor in stress corrosion cracking (SCC), which has been observed near the welded zone in nuclear power plants. Surface residual stress is especially significant for SCC initiation. In the joining of pipes, butt welding is conducted after surface machining. Residual stress is generated by both processes, and the residual stress distribution due to surface machining is altered by the subsequent butt welding. In a previous paper, the authors reported that the residual stress distribution generated by bead-on-plate welding after surface machining has a local maximum residual stress near the weld metal. The local maximum residual stress reaches approximately 900 MPa, which exceeds the stress threshold for SCC initiation. Therefore, for the safety improvement of nuclear power plants, a study on the local maximum residual stress is important. In this study, the effect of surface machining and welding conditions on the residual stress distribution generated by welding after surface machining was investigated. Surface machining using a lathe and bead-on-plate welding with a tungsten inert gas (TIG) arc under various conditions were conducted on plate specimens made of SUS316L. Residual stress distributions were then measured by the X-ray diffraction method (XRD). As a result, the residual stress distributions have a local maximum near the weld metal in all specimens, and the values of the local maximum residual stress are almost the same. The location of the local maximum residual stress varies with the welding condition. It can be considered that the local maximum residual stress is generated by the same mechanism as the welding residual stress, in a surface-machined layer that has a high yield stress. (author)

  7. Operation of a quantum dot in the finite-state machine mode: Single-electron dynamic memory

    Energy Technology Data Exchange (ETDEWEB)

    Klymenko, M. V. [Department of Chemistry, University of Liège, B4000 Liège (Belgium); Klein, M. [The Fritz Haber Center for Molecular Dynamics and the Institute of Chemistry, The Hebrew University of Jerusalem, Jerusalem 91904 (Israel); Levine, R. D. [The Fritz Haber Center for Molecular Dynamics and the Institute of Chemistry, The Hebrew University of Jerusalem, Jerusalem 91904 (Israel); Crump Institute for Molecular Imaging and Department of Molecular and Medical Pharmacology, David Geffen School of Medicine and Department of Chemistry and Biochemistry, University of California, Los Angeles, California 90095 (United States); Remacle, F., E-mail: fremacle@ulg.ac.be [Department of Chemistry, University of Liège, B4000 Liège (Belgium); The Fritz Haber Center for Molecular Dynamics and the Institute of Chemistry, The Hebrew University of Jerusalem, Jerusalem 91904 (Israel)

    2016-07-14

    A single electron dynamic memory is designed based on the non-equilibrium dynamics of charge states in electrostatically defined metallic quantum dots. Using the orthodox theory for computing the transfer rates and a master equation, we model the dynamical response of devices consisting of a charge sensor coupled to either a single or a double quantum dot subjected to a pulsed gate voltage. We show that transition rates between charge states in metallic quantum dots are characterized by an asymmetry that can be controlled by the gate voltage. This effect is more pronounced when the switching between charge states corresponds to a Markovian process involving electron transport through a chain of several quantum dots. By simulating the dynamics of electron transport we demonstrate that the quantum box operates as a finite-state machine that can be addressed by choosing suitable shapes and switching rates of the gate pulses. We further show that writing times in the ns range and retention memory times six orders of magnitude longer, in the ms range, can be achieved on the double quantum dot system using experimentally feasible parameters, thereby demonstrating that the device can operate as a dynamic single electron memory.
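
    The asymmetric-rate idea behind the ns-write/ms-retention behavior can be illustrated with a two-state master equation whose in- and out-rates differ by orders of magnitude. The rates below are illustrative round numbers, not the orthodox-theory values computed in the paper.

```python
import numpy as np

# Occupation probability p of the "1" charge state obeys the linear master
# equation dp/dt = k_in * (1 - p) - k_out * p, with closed-form solution:
def relax(p0, k_in, k_out, t):
    p_inf = k_in / (k_in + k_out) if (k_in + k_out) > 0 else p0
    return p_inf + (p0 - p_inf) * np.exp(-(k_in + k_out) * t)

# Fast filling (large k_in) and slow loss (small k_out): the state is written
# within a few ns, yet decays on a ms scale once the write pulse ends.
write = relax(0.0, k_in=1e9, k_out=1e3, t=5e-9)   # after a 5 ns write pulse
hold = relax(1.0, k_in=0.0, k_out=1e3, t=1e-4)    # 0.1 ms into retention
print(f"after write: {write:.3f}, after hold: {hold:.3f}")
```

    The six-order-of-magnitude separation between write and retention times in the abstract corresponds to exactly this kind of rate asymmetry.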

  8. Operation of a quantum dot in the finite-state machine mode: Single-electron dynamic memory

    International Nuclear Information System (INIS)

    Klymenko, M. V.; Klein, M.; Levine, R. D.; Remacle, F.

    2016-01-01

    A single electron dynamic memory is designed based on the non-equilibrium dynamics of charge states in electrostatically defined metallic quantum dots. Using the orthodox theory for computing the transfer rates and a master equation, we model the dynamical response of devices consisting of a charge sensor coupled to either a single or a double quantum dot subjected to a pulsed gate voltage. We show that transition rates between charge states in metallic quantum dots are characterized by an asymmetry that can be controlled by the gate voltage. This effect is more pronounced when the switching between charge states corresponds to a Markovian process involving electron transport through a chain of several quantum dots. By simulating the dynamics of electron transport we demonstrate that the quantum box operates as a finite-state machine that can be addressed by choosing suitable shapes and switching rates of the gate pulses. We further show that writing times in the ns range and retention memory times six orders of magnitude longer, in the ms range, can be achieved on the double quantum dot system using experimentally feasible parameters, thereby demonstrating that the device can operate as a dynamic single electron memory.

  9. Administrator of 9/11 victim compensation fund to administer Hokie Spirit Memorial Fund distributions

    OpenAIRE

    Hincker, Lawrence

    2007-01-01

    Virginia Tech President Charles Steger has asked Kenneth R. Feinberg, who served as "Special Master of the federal September 11th Victim Compensation Fund of 2001," to administer distributions of the university Hokie Spirit Memorial Fund (HSMF).

  10. Experiences and results multitasking a hydrodynamics code on global and local memory machines

    International Nuclear Information System (INIS)

    Mandell, D.

    1987-01-01

    A one-dimensional, time-dependent Lagrangian hydrodynamics code using a Godunov solution method has been multitasked for the Cray X-MP/48, the Intel iPSC hypercube, the Alliant FX series and the IBM RP3 computers. Actual multitasking results have been obtained for the Cray, Intel and Alliant computers, and simulated results were obtained for the Cray and RP3 machines. The differences in the methods required to multitask on each of the machines are discussed. Results are presented for a sample problem involving a shock wave moving down a channel. Comparisons are made between theoretical speedups, predicted by Amdahl's law, and the actual speedups obtained. The problems of debugging on the different machines are also described.
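
    The theoretical speedups referred to above follow Amdahl's law, which bounds the speedup by the serial fraction of the code. A minimal sketch (the 95% parallel fraction is an illustrative figure, not from the study):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Theoretical speedup for a code whose parallelizable fraction is
    `parallel_fraction`, run on `n_processors` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# E.g., a loop that is 95% parallelizable on 4 processors:
print(round(amdahl_speedup(0.95, 4), 2))   # 1/(0.05 + 0.95/4) = 3.48
```

    Comparing such predictions against measured speedups, as the paper does, exposes overheads (synchronization, communication) that Amdahl's law ignores.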

  11. Childhood amnesia in the making: different distributions of autobiographical memories in children and adults.

    Science.gov (United States)

    Bauer, Patricia J; Larkina, Marina

    2014-04-01

    Within the memory literature, a robust finding is of childhood amnesia: a relative paucity among adults for autobiographical or personal memories from the first 3 to 4 years of life, and from the first 7 years, a smaller number of memories than would be expected based on normal forgetting. Childhood amnesia is observed in spite of strong evidence that during the period eventually obscured by the amnesia, children construct and preserve autobiographical memories. Why early memories seemingly are lost to recollection is an unanswered question. In the present research, we examined the issue by using the cue word technique to chart the distributions of autobiographical memories in samples of children ages 7 to 11 years and samples of young and middle-aged adults. Among adults, the distributions were best fit by the power function, whereas among children, the exponential function provided a better fit to the distributions of memories. The findings suggest that a major source of childhood amnesia is a constant rate of forgetting in childhood, seemingly resulting from failed consolidation, the outcome of which is a smaller pool of memories available for later retrieval.
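
    The power-versus-exponential comparison above can be sketched with toy retention data: both functions become linear after a log transform, so each can be fit by linear regression in the appropriate space. The counts and decay constants below are invented for illustration.

```python
import numpy as np

# Hypothetical counts of memories recalled versus memory age (years).
# Adult-like retention follows a power function y = a * t^(-b);
# child-like retention follows an exponential y = a * exp(-b * t).
ages = np.arange(1.0, 9.0)
power_counts = 40.0 * ages ** -1.2
expo_counts = 40.0 * np.exp(-0.5 * ages)

def fit_power(t, y):
    # Power law is linear in log-log space: log y = log a - b log t.
    b, loga = np.polyfit(np.log(t), np.log(y), 1)
    return np.exp(loga), -b

def fit_exponential(t, y):
    # Exponential is linear in semi-log space: log y = log a - b t.
    b, loga = np.polyfit(t, np.log(y), 1)
    return np.exp(loga), -b

a, b = fit_power(ages, power_counts)
print(f"power fit: a={a:.1f}, b={b:.2f}")
```

    In practice one would fit both forms to each age group's memory distribution and compare goodness of fit, which is how the study distinguishes children's exponential forgetting from adults' power-law retention.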

  12. Numerical and machine learning simulation of parametric distributions of groundwater residence time in streams and wells

    Science.gov (United States)

    Starn, J. J.; Belitz, K.; Carlson, C.

    2017-12-01

    Groundwater residence-time distributions (RTDs) are critical for assessing susceptibility of water resources to contamination. This novel approach for estimating regional RTDs was to first simulate groundwater flow using existing regional digital data sets in 13 intermediate size watersheds (each an average of 7,000 square kilometers) that are representative of a wide range of glacial systems. RTDs were simulated with particle tracking. We refer to these models as "general models" because they are based on regional, as opposed to site-specific, digital data. Parametric RTDs were created from particle RTDs by fitting 1- and 2-component Weibull, gamma, and inverse Gaussian distributions, thus reducing a large number of particle travel times to 3 to 7 parameters (shape, location, and scale for each component plus a mixing fraction) for each modeled area. The scale parameter of these distributions is related to the mean exponential age; the shape parameter controls departure from the ideal exponential distribution and is partly a function of interaction with bedrock and with drainage density. Given the flexible shape and mathematical similarity of these distributions, any of them are potentially a good fit to particle RTDs. The 1-component gamma distribution provided a good fit to basin-wide particle RTDs. RTDs at monitoring wells and streams often have more complicated shapes than basin-wide RTDs, caused in part by heterogeneity in the model, and generally require 2-component distributions. A machine learning model was trained on the RTD parameters using features derived from regionally available watershed characteristics such as recharge rate, material thickness, and stream density. RTDs appeared to vary systematically across the landscape in relation to watershed features. This relation was used to produce maps of useful metrics with respect to risk-based thresholds, such as the time to first exceedance, time to maximum concentration, time above the threshold
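
    The parametric reduction described above, collapsing many particle travel times to a few distribution parameters, can be sketched with a method-of-moments fit of a 1-component gamma RTD. This is a simpler stand-in for whatever fitting procedure the study used, and all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical particle travel times (years) from a particle-tracking run.
true_shape, true_scale = 2.0, 15.0
travel_times = rng.gamma(true_shape, true_scale, size=50_000)

# Method-of-moments fit of a gamma distribution: mean = shape * scale and
# var = shape * scale^2, so two sample moments determine both parameters.
mean, var = travel_times.mean(), travel_times.var()
shape_hat = mean ** 2 / var
scale_hat = var / mean
print(f"shape={shape_hat:.2f}, scale={scale_hat:.1f} (mean age {mean:.1f} yr)")
```

    Here tens of thousands of travel times collapse to two numbers; the scale parameter tracks the mean age, while the shape parameter measures departure from the ideal exponential distribution, mirroring the interpretation in the abstract.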

  13. Adaptability of optimization concept in the context of cryogenic distribution for superconducting magnets of fusion machine

    Science.gov (United States)

    Sarkar, Biswanath; Bhattacharya, Ritendra Nath; Vaghela, Hitensinh; Shah, Nitin Dineshkumar; Choukekar, Ketan; Badgujar, Satish

    2012-06-01

    The cryogenic distribution system (CDS) plays a vital role in the reliable operation of large-scale fusion machines in a Tokamak configuration. Managing the dynamic heat loads from the superconducting magnets, namely the toroidal field, poloidal field, central solenoid and supporting structure, is the most important function of the CDS, along with the static heat loads. Two concepts are foreseen for the configuration of the CDS: singular distribution and collective distribution. In the first concept, each magnet is assigned one distribution box with its own sub-cooler bath. In the collective concept, it is possible to share one common bath among more than one magnet system. A case study has been performed with an identical dynamic heat load profile applied to both concepts in the same time domain. The choice of a combined system for the magnets is also part of the study, without compromising system functionality. Process modeling and detailed simulations have been performed for both options using Aspen HYSYS®. Multiple plasma pulses per day have been considered to verify the residual energy deposited in the superconducting magnets at the end of each plasma pulse. Preliminary 3D modeling using CATIA® has been performed along with a first level of component sizing.

  14. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    International Nuclear Information System (INIS)

    Veronesi, F; Grassi, S

    2016-01-01

    Wind resource assessment is a key aspect of wind farm planning, since it allows estimation of the long-term electricity production. Moreover, wind speed time-series at high resolution are helpful for estimating the temporal changes of the electricity generation and indispensable for designing stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. In fact, this method is able to obtain reliable wind speed estimates and propagate all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only an accurate estimation, but also the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed, but also its confidence interval, which are crucial data for planners. (paper)

  15. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    Science.gov (United States)

    Veronesi, F.; Grassi, S.

    2016-09-01

    Wind resource assessment is a key aspect of wind farm planning, since it allows estimation of the long-term electricity production. Moreover, wind speed time-series at high resolution are helpful for estimating the temporal changes of the electricity generation and indispensable for designing stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. In fact, this method is able to obtain reliable wind speed estimates and propagate all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only an accurate estimation, but also the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed, but also its confidence interval, which are crucial data for planners.
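
    The per-location confidence intervals mentioned above can be illustrated with a simple bootstrap over hypothetical hourly wind speeds. This is a stand-in for the paper's own uncertainty propagation; the Weibull shape and scale below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly wind speeds (m/s) at one mapped location; a Weibull-like
# sample stands in for the estimated time-series.
speeds = rng.weibull(2.0, size=2_000) * 6.0

# Bootstrap a 95% confidence interval for the mean wind speed, the kind of
# per-location uncertainty band delivered to planners.
boot_means = np.array([
    rng.choice(speeds, size=speeds.size, replace=True).mean()
    for _ in range(1_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean {speeds.mean():.2f} m/s, 95% CI [{lo:.2f}, {hi:.2f}]")
```

    Repeating this at every mapped location yields both a mean wind speed map and a companion uncertainty map.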

  16. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array comprises two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and a self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly, to 1/3 of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  17. A Distributed Algorithm for the Cluster-Based Outlier Detection Using Unsupervised Extreme Learning Machines

    Directory of Open Access Journals (Sweden)

    Xite Wang

    2017-01-01

    Full Text Available Outlier detection is an important data mining task, whose target is to find the abnormal or atypical objects in a given dataset. Techniques for detecting outliers have many applications, such as credit card fraud detection and environment monitoring. Our previous work proposed the Cluster-Based (CB) outlier and gave a centralized method using unsupervised extreme learning machines to compute CB outliers. In this paper, we propose a new distributed algorithm for CB outlier detection (DACB). On the master node, we collect a small number of points from the slave nodes to obtain a threshold. On each slave node, we design a new filtering method that can use the threshold to efficiently speed up the computation. Furthermore, we also propose a ranking method to optimize the order of cluster scanning. Finally, the effectiveness and efficiency of the proposed approaches are verified through extensive simulation experiments.
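
    The master/slave filtering idea can be sketched as follows, using distance-to-center as a stand-in for the CB outlier score. The key property is that a threshold taken as the k-th largest score in a sample never exceeds the k-th largest in the full set, so local filtering cannot discard a true top-k outlier. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: two "slave nodes", each holding points near one cluster
# center; the outlier score of a point is its distance to its center.
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
slave_data = [rng.normal(c, 1.0, size=(5_000, 2)) for c in centers]
k = 20

def scores(points, center):
    return np.linalg.norm(points - center, axis=1)

all_scores = [scores(d, c) for d, c in zip(slave_data, centers)]

# Master: pool a small sample from each slave; the k-th largest sampled score
# is a valid pruning threshold (a subset's k-th largest never exceeds the
# full set's, so no true top-k outlier is lost).
sample = np.concatenate([s[:200] for s in all_scores])
threshold = np.sort(sample)[-k]

# Slaves: filter locally and ship only candidates that reach the threshold.
candidates = np.concatenate([s[s >= threshold] for s in all_scores])
top_k = np.sort(candidates)[-k:]
print(f"shipped {candidates.size} of 10000 points")
```

    Only a small fraction of points survives the filter, which is where the speed-up over shipping everything to the master comes from.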

  18. From shoebox to performative agent: the computer as personal memory machine

    NARCIS (Netherlands)

    van Dijck, J.

    2005-01-01

    Digital technologies offer new opportunities in the everyday lives of people: with still expanding memory capacities, the computer is rapidly becoming a giant storage and processing facility for recording and retrieving ‘bits of life’. Software engineers and companies promise not only to expand the

  19. Memory-assisted quantum key distribution resilient against multiple-excitation effects

    Science.gov (United States)

    Lo Piparo, Nicolò; Sinclair, Neil; Razavi, Mohsen

    2018-01-01

    Memory-assisted measurement-device-independent quantum key distribution (MA-MDI-QKD) has recently been proposed as a technique to improve the rate-versus-distance behavior of QKD systems by using existing, or nearly-achievable, quantum technologies. The promise is that MA-MDI-QKD would require less demanding quantum memories than the ones needed for probabilistic quantum repeaters. Nevertheless, early investigations suggest that, in order to beat the conventional memory-less QKD schemes, the quantum memories used in the MA-MDI-QKD protocols must have high bandwidth-storage products and short interaction times. Among different types of quantum memories, ensemble-based memories offer some of the required specifications, but they typically suffer from multiple excitation effects. To avoid the latter issue, in this paper, we propose two new variants of MA-MDI-QKD both relying on single-photon sources for entangling purposes. One is based on known techniques for entanglement distribution in quantum repeaters. This scheme turns out to offer no advantage even if one uses ideal single-photon sources. By finding the root cause of the problem, we then propose another setup, which can outperform single memory-less setups even if we allow for some imperfections in our single-photon sources. For such a scheme, we compare the key rate for different types of ensemble-based memories and show that certain classes of atomic ensembles can improve the rate-versus-distance behavior.

  20. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    OpenAIRE

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large section of overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors. MPI_Allgather collecti...

  1. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    Science.gov (United States)

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel, distributed across several selected physical hosts. It then executes the second-stage genetic algorithm with the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.
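
The two-stage scheme described above can be sketched in miniature: independent first-stage GAs (one per selected host) evolve candidate VM placements, and their best individuals seed the second-stage population. The function names, the toy cost-matrix objective, and the parameter values below are illustrative assumptions, not the paper's actual model.

```python
import random

def fitness(placement, costs):
    # Toy objective (assumption): minimize total cost of assigning VM i to host placement[i].
    return -sum(costs[vm][host] for vm, host in enumerate(placement))

def run_ga(pop, costs, generations=50, mut_rate=0.1):
    n_hosts = len(costs[0])
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, costs), reverse=True)
        survivors = pop[: len(pop) // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < mut_rate:        # mutation: reassign one VM
                child[random.randrange(len(child))] = random.randrange(n_hosts)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, costs))

def dpga(costs, n_islands=4, pop_size=20, seed=0):
    random.seed(seed)
    n_vms, n_hosts = len(costs), len(costs[0])
    rand_pop = lambda: [[random.randrange(n_hosts) for _ in range(n_vms)]
                        for _ in range(pop_size)]
    # Stage 1: independent GAs (run on separate physical hosts in the paper's setting).
    elites = [run_ga(rand_pop(), costs) for _ in range(n_islands)]
    # Stage 2: stage-1 elites seed the initial population of a final GA.
    pop2 = elites + rand_pop()[: pop_size - len(elites)]
    return run_ga(pop2, costs)
```

In the paper the stage-1 runs execute concurrently on the selected hosts; here they are sequential purely for illustration.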

  2. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    Directory of Open Access Journals (Sweden)

    Yu-Shuang Dong

    2014-01-01

Full Text Available The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel, distributed across several selected physical hosts. It then executes the second-stage genetic algorithm with the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.

  3. Two alternate proofs of Wang's lune formula for sparse distributed memory and an integral approximation

    Science.gov (United States)

    Jaeckel, Louis A.

    1988-01-01

    In Kanerva's Sparse Distributed Memory, writing to and reading from the memory are done in relation to spheres in an n-dimensional binary vector space. Thus it is important to know how many points are in the intersection of two spheres in this space. Two proofs are given of Wang's formula for spheres of unequal radii, and an integral approximation for the intersection in this case.
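
The counting problem described above has a direct combinatorial form. The sketch below is an assumption-consistent reconstruction (not necessarily Wang's exact formula): for centers x, y in {0,1}^n at Hamming distance d, any point z is fixed by choosing k of the d differing bits to agree with x and j of the n-d shared bits to flip, giving dist(x,z) = (d-k)+j and dist(y,z) = k+j. The result can be checked against brute-force enumeration for small n.

```python
from math import comb
from itertools import product

def sphere_intersection(n, d, r1, r2):
    # Count points z with dist(x,z) <= r1 and dist(y,z) <= r2,
    # where x and y are at Hamming distance d in {0,1}^n.
    total = 0
    for k in range(d + 1):            # differing bits where z agrees with x
        for j in range(n - d + 1):    # shared bits that z flips
            if (d - k) + j <= r1 and k + j <= r2:
                total += comb(d, k) * comb(n - d, j)
    return total

def brute_force(n, d, r1, r2):
    # Direct enumeration over all 2^n points, for verification only.
    x = (0,) * n
    y = tuple(1 if i < d else 0 for i in range(n))  # dist(x, y) == d
    dist = lambda a, b: sum(u != v for u, v in zip(a, b))
    return sum(1 for z in product((0, 1), repeat=n)
               if dist(x, z) <= r1 and dist(y, z) <= r2)
```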

  4. How are rescaled range analyses affected by different memory and distributional properties? A Monte Carlo study

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2012-01-01

Roč. 391, č. 17 (2012), s. 4252-4260 ISSN 0378-4371 R&D Projects: GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310; SVV(CZ) 261 501 Institutional support: RVO:67985556 Keywords : Rescaled range analysis * Modified rescaled range analysis * Hurst exponent * Long-term memory * Short-term memory Subject RIV: AH - Economics Impact factor: 1.676, year: 2012 http://library.utia.cas.cz/separaty/2012/E/kristoufek-how are rescaled range analyses affected by different memory and distributional properties.pdf

  5. Efficient packing of patterns in sparse distributed memory by selective weighting of input bits

    Science.gov (United States)

    Kanerva, Pentti

    1991-01-01

    When a set of patterns is stored in a distributed memory, any given storage location participates in the storage of many patterns. From the perspective of any one stored pattern, the other patterns act as noise, and such noise limits the memory's storage capacity. The more similar the retrieval cues for two patterns are, the more the patterns interfere with each other in memory, and the harder it is to separate them on retrieval. A method is described of weighting the retrieval cues to reduce such interference and thus to improve the separability of patterns that have similar cues.
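
The idea of weighting retrieval cues can be illustrated with a weighted Hamming distance over memory-location addresses: bits known to discriminate between similar cues receive larger weights, so the sets of locations activated by the two cues overlap less. The function names and the activation-by-threshold rule below are illustrative simplifications of Kanerva's scheme, not his exact method.

```python
def weighted_distance(a, b, w):
    # Sum of weights over the bit positions where a and b disagree.
    return sum(wi for ai, bi, wi in zip(a, b, w) if ai != bi)

def activated(locations, cue, w, threshold):
    # Indices of hard locations whose address lies within the weighted
    # distance threshold of the cue (the "access circle" of the cue).
    return {i for i, loc in enumerate(locations)
            if weighted_distance(cue, loc, w) <= threshold}
```

With uniform weights this reduces to ordinary sparse-distributed-memory activation; raising the weights on discriminative bits pushes locations that match the wrong cue on those bits out of the access circle.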

  6. Portable memory consistency for software managed distributed memory in many-core SoC

    NARCIS (Netherlands)

    Rutgers, J.H.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2013-01-01

    Porting software to different platforms can require modifications of the application. One of the issues is that the targeted hardware supports another memory consistency model. As a consequence, the completion order of reads and writes in a multi-threaded application can change, which may result in

  7. Recognition of simple visual images using a sparse distributed memory: Some implementations and experiments

    Science.gov (United States)

    Jaeckel, Louis A.

    1990-01-01

Previously, a method was described for representing a class of simple visual images so that they could be used with a Sparse Distributed Memory (SDM). Herein, two possible implementations of an SDM are described, for which these images, suitably encoded, will serve both as addresses to the memory and as data to be stored in the memory. A key feature of both implementations is that a pattern represented as an unordered set with a variable number of members can be used as an address to the memory. In the 1st model, an image is encoded as a 9072-bit string to be used as a read or write address; the bit string may also be used as data to be stored in the memory. Another representation, in which an image is encoded as a 256-bit string, may be used with either model as data to be stored in the memory, but not as an address. In the 2nd model, an image is not represented as a vector of fixed length to be used as an address. Instead, a rule is given for determining which memory locations are to be activated in response to an encoded image. This activation rule treats the pieces of an image as an unordered set. With this model, the memory can be simulated, based on a method of computing the approximate result of a read operation.

  8. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid and Building-to-Grid Interface

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Emma M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hendrix, Val [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Deka, Deepjyoti [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-16

    This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data and make predictions and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors – such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals – such as total carbon reduction or other economic benefit to customers. While some basic analysis into these data streams can provide a wealth of information, computational and human boundaries on performing the analysis

  9. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  10. Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines

    International Nuclear Information System (INIS)

    Hunter, M.A.; Haghighat, A.

    1993-01-01

    Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)

  11. Alchemical and structural distribution based representation for universal quantum machine learning

    Science.gov (United States)

    Faber, Felix A.; Christensen, Anders S.; Huang, Bing; von Lilienfeld, O. Anatole

    2018-06-01

    We introduce a representation of any atom in any chemical environment for the automatized generation of universal kernel ridge regression-based quantum machine learning (QML) models of electronic properties, trained throughout chemical compound space. The representation is based on Gaussian distribution functions, scaled by power laws and explicitly accounting for structural as well as elemental degrees of freedom. The elemental components help us to lower the QML model's learning curve, and, through interpolation across the periodic table, even enable "alchemical extrapolation" to covalent bonding between elements not part of training. This point is demonstrated for the prediction of covalent binding in single, double, and triple bonds among main-group elements as well as for atomization energies in organic molecules. We present numerical evidence that resulting QML energy models, after training on a few thousand random training instances, reach chemical accuracy for out-of-sample compounds. Compound datasets studied include thousands of structurally and compositionally diverse organic molecules, non-covalently bonded protein side-chains, (H2O)40-clusters, and crystalline solids. Learning curves for QML models also indicate competitive predictive power for various other electronic ground state properties of organic molecules, calculated with hybrid density functional theory, including polarizability, heat-capacity, HOMO-LUMO eigenvalues and gap, zero point vibrational energy, dipole moment, and highest vibrational fundamental frequency.
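
As a rough illustration of the learning machinery such a representation feeds into, the sketch below implements plain kernel ridge regression with a Gaussian kernel on toy one-dimensional inputs. Everything here is a simplified assumption: the paper's inputs are per-atom, distribution-based feature vectors, and its kernels act on those, not on scalars.

```python
import math

def gaussian_kernel(a, b, sigma=1.0):
    # Gaussian (RBF) similarity between two scalar inputs.
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def solve(A, y):
    # Gaussian elimination with partial pivoting (small dense systems only).
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-8, sigma=1.0):
    # Solve (K + lam*I) alpha = y for the regression coefficients.
    K = [[gaussian_kernel(a, b, sigma) + (lam if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    return solve(K, ys)

def krr_predict(alphas, xs, x, sigma=1.0):
    # Prediction is a kernel-weighted sum over the training inputs.
    return sum(a * gaussian_kernel(xi, x, sigma) for a, xi in zip(alphas, xs))
```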

  12. Memory

    Science.gov (United States)

    ... it has to decide what is worth remembering. Memory is the process of storing and then remembering this information. There are different types of memory. Short-term memory stores information for a few ...

  13. Teraflop-scale Incremental Machine Learning

    OpenAIRE

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-u...

  14. Determination and shaping of the ion-velocity distribution function in a single-ended Q machine

    DEFF Research Database (Denmark)

    Andersen, S.A.; Jensen, Vagn Orla; Michelsen, Poul

    1971-01-01

An electrostatic energy analyzer with a resolution better than 0.03 eV was constructed. This analyzer was used to determine the ion-velocity distribution function at different densities and plate temperatures in a single-ended Q machine. In all regions good agreement with theoretical predictions... based on simple, physical pictures is obtained. It is shown that within certain limits the velocity distribution function can be shaped; double-humped distribution functions have been obtained. The technique used here is suggested as an accurate method for determination of plasma densities within 10...

  15. Data Provenance for Agent-Based Models in a Distributed Memory

    Directory of Open Access Journals (Sweden)

    Delmar B. Davis

    2018-04-01

Full Text Available Agent-Based Models (ABMs) assist with studying emergent collective behavior of individual entities in social, biological, economic, network, and physical systems. Data provenance can support ABM by explaining individual agent behavior. However, there is no provenance support for ABMs in a distributed setting. The Multi-Agent Spatial Simulation (MASS) library provides a framework for simulating ABMs at fine granularity, where agents and spatial data are shared application resources in a distributed memory. We introduce a novel approach to capture ABM provenance in a distributed memory, called ProvMASS. We evaluate our technique with traditional data provenance queries and performance measures. Our results indicate that a configurable approach can capture provenance that explains coordination of distributed shared resources, simulation logic, and agent behavior while limiting performance overhead. We also show the ability to support practical analyses (e.g., agent tracking) and storage requirements for different capture configurations.

  16. Power profiling of Cholesky and QR factorizations on distributed memory systems

    KAUST Repository

    Bosilca, George

    2012-08-30

    This paper presents the power profile of two high performance dense linear algebra libraries on distributed memory systems, ScaLAPACK and DPLASMA. From the algorithmic perspective, their methodologies are opposite. The former is based on block algorithms and relies on multithreaded BLAS and a two-dimensional block cyclic data distribution to achieve high parallel performance. The latter is based on tile algorithms running on top of a tile data layout and uses fine-grained task parallelism combined with a dynamic distributed scheduler (DAGuE) to leverage distributed memory systems. We present performance results (Gflop/s) as well as the power profile (Watts) of two common dense factorizations needed to solve linear systems of equations, namely Cholesky and QR. The reported numbers show that DPLASMA surpasses ScaLAPACK not only in terms of performance (up to 2X speedup) but also in terms of energy efficiency (up to 62 %). © 2012 Springer-Verlag (outside the USA).

  17. Extending and implementing the Self-adaptive Virtual Processor for distributed memory architectures

    NARCIS (Netherlands)

    van Tol, M.W.; Koivisto, J.

    2011-01-01

    Many-core architectures of the future are likely to have distributed memory organizations and need fine grained concurrency management to be used effectively. The Self-adaptive Virtual Processor (SVP) is an abstract concurrent programming model which can provide this, but the model and its current

  18. Learning to read aloud: A neural network approach using sparse distributed memory

    Science.gov (United States)

    Joglekar, Umesh Dwarkanath

    1989-01-01

    An attempt to solve a problem of text-to-phoneme mapping is described which does not appear amenable to solution by use of standard algorithmic procedures. Experiments based on a model of distributed processing are also described. This model (sparse distributed memory (SDM)) can be used in an iterative supervised learning mode to solve the problem. Additional improvements aimed at obtaining better performance are suggested.

  19. Effects of pole flux distribution in a homopolar linear synchronous machine

    Science.gov (United States)

    Balchin, M. J.; Eastham, J. F.; Coles, P. C.

    1994-05-01

    Linear forms of synchronous electrical machine are at present being considered as the propulsion means in high-speed, magnetically levitated (Maglev) ground transportation systems. A homopolar form of machine is considered in which the primary member, which carries both ac and dc windings, is supported on the vehicle. Test results and theoretical predictions are presented for a design of machine intended for driving a 100 passenger vehicle at a top speed of 400 km/h. The layout of the dc magnetic circuit is examined to locate the best position for the dc winding from the point of view of minimum core weight. Measurements of flux build-up under the machine at different operating speeds are given for two types of secondary pole: solid and laminated. The solid pole results, which are confirmed theoretically, show that this form of construction is impractical for high-speed drives. Measured motoring characteristics are presented for a short length of machine which simulates conditions at the leading and trailing ends of the full-sized machine. Combination of the results with those from a cylindrical version of the machine make it possible to infer the performance of the full-sized traction machine. This gives 0.8 pf and 0.9 efficiency at 300 km/h, which is much better than the reported performance of a comparable linear induction motor (0.52 pf and 0.82 efficiency). It is therefore concluded that in any projected high-speed Maglev systems, a linear synchronous machine should be the first choice as the propulsion means.

  20. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    Science.gov (United States)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there are a set non-identical factories or production lines, each one with a set unrelated parallel machine with different speeds in processing them disposed to a single assembly machine in series. A set of different products that are manufactured through an assembly program of a set of components (jobs) according to the requested demand. Each product requires several kinds of jobs with different sizes. Beside that we also consider to the multi-objective problem (MOP) of minimizing mean flow time and the number of tardy products simultaneously. This is known to be NP-Hard problem, is important to practice, as the former criterions to reflect the customer's demand and manufacturer's perspective. This is a realistic and complex problem with wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of Taguchi technique. All proposed algorithms are tested by Matlab software. Our computational experiments indicate that the proposed problem and fourth proposed algorithms are able to be implemented and can be used to solve moderately-sized instances, and giving efficient solutions, which are close to optimum in most cases.

  1. The use of fractal dimension calculation algorithm to determine the nature of autobiographical memories distribution across the life span

    Science.gov (United States)

    Mitina, Olga V.; Nourkova, Veronica V.

In this research we offer a technique for calculating the density of events that people retrieve from autobiographical memory. We sought to demonstrate the non-uniform nature of the distribution of memories over time, and were interested in the law governing the distribution of these events across the life course.

  2. Feature-Based Visual Short-Term Memory Is Widely Distributed and Hierarchically Organized.

    Science.gov (United States)

    Dotson, Nicholas M; Hoffman, Steven J; Goodell, Baldwin; Gray, Charles M

    2018-06-15

Feature-based visual short-term memory is known to engage both sensory and association cortices. However, the extent of the participating circuit and the neural mechanisms underlying memory maintenance are still a matter of vigorous debate. To address these questions, we recorded neuronal activity from 42 cortical areas in monkeys performing a feature-based visual short-term memory task and an interleaved fixation task. We find that task-dependent differences in firing rates are widely distributed throughout the cortex, while stimulus-specific changes in firing rates are more restricted and hierarchically organized. We also show that microsaccades during the memory delay encode the stimuli held in memory and that units modulated by microsaccades are more likely to exhibit stimulus specificity, suggesting that eye movements contribute to visual short-term memory processes. These results support a framework in which most cortical areas, within a modality, contribute to mnemonic representations at timescales that increase along the cortical hierarchy. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Immigration, language proficiency, and autobiographical memories: Lifespan distribution and second-language access.

    Science.gov (United States)

    Esposito, Alena G; Baker-Ward, Lynne

    2016-08-01

This investigation examined two controversies in the autobiographical literature: how cross-language immigration affects the distribution of autobiographical memories across the lifespan and under what circumstances language-dependent recall is observed. Both Spanish/English bilingual immigrants and English monolingual non-immigrants participated in a cue word study, with the bilingual sample taking part in a within-subject language manipulation. The expected bump in the number of memories from early life was observed for non-immigrants but not immigrants, who reported more memories for events surrounding immigration. Aspects of the methodology addressed possible reasons for past discrepant findings. Language-dependent recall was influenced by second-language proficiency. Results were interpreted as evidence that bilinguals with high second-language proficiency, in contrast to those with lower second-language proficiency, access a single conceptual store through either language. The final multi-level model predicting language-dependent recall, including second-language proficiency, age of immigration, internal language, and cue word language, explained 3/4 of the between-person variance and 1/5 of the within-person variance. We arrive at two conclusions. First, major life transitions influence the distribution of memories. Second, concept representation across multiple languages follows a developmental model. In addition, the results underscore the importance of considering language experience in research involving memory reports.

  4. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
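
A minimal version of the sorted k-mer lists mentioned above can be sketched as follows. The function names are illustrative, and the merge scan assumes each k-mer occurs at most once per sequence; the real data structures in progressiveMauve-style aligners handle repeats and are distributed across compute nodes.

```python
def kmer_list(seq, k):
    # (k-mer, offset) pairs, sorted lexicographically by k-mer.
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

def shared_kmers(seq_a, seq_b, k):
    # Merge-style scan over two sorted lists to find k-mers common to both
    # sequences, yielding (kmer, offset_in_a, offset_in_b) seed matches.
    a, b = kmer_list(seq_a, k), kmer_list(seq_b, k)
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:
            out.append((a[i][0], a[i][1], b[j][1]))
            i += 1
            j += 1
        elif a[i][0] < b[j][0]:
            i += 1
        else:
            j += 1
    return out
```

Sorting each list once makes the common-seed scan linear in the list lengths, which is what makes the structure attractive to partition across nodes.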

  5. Mnemonic transmission, social contagion, and emergence of collective memory: Influence of emotional valence, group structure, and information distribution.

    Science.gov (United States)

    Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna

    2017-09-01

Social transmission of memory and its consequence on collective memory have generated enduring interdisciplinary interest because of their widespread significance in interpersonal, sociocultural, and political arenas. We tested the influence of 3 key factors (emotional salience of information, group structure, and information distribution) on mnemonic transmission, social contagion, and collective memory. Participants individually studied emotionally salient (negative or positive) and nonemotional (neutral) picture-word pairs that were completely shared, partially shared, or unshared within participant triads, and then completed 3 consecutive recalls in 1 of 3 conditions: individual-individual-individual (control), collaborative-collaborative (identical group; insular structure)-individual, and collaborative-collaborative (reconfigured group; diverse structure)-individual. Collaboration enhanced negative memories especially in insular group structure and especially for shared information, and promoted collective forgetting of positive memories. Diverse group structure reduced this negativity effect. Unequally distributed information led to social contagion that creates false memories; diverse structure propagated a greater variety of false memories whereas insular structure promoted confidence in false recognition and false collective memory. A simultaneous assessment of network structure, information distribution, and emotional valence breaks new ground to specify how network structure shapes the spread of negative memories and false memories, and the emergence of collective memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Simulation of Particulate Flows on Multi-Processor Machines with Distributed Memory

    International Nuclear Information System (INIS)

    Uhlmann, M.

    2004-01-01

We present a method for the parallelization of an immersed boundary algorithm for particulate flows using the MPI standard of communication. The treatment of the fluid phase uses the domain decomposition technique over a Cartesian processor grid. The solution of the Helmholtz problem is approximately factorized and relies upon a parallel tri-diagonal solver; the Poisson problem is solved by means of a parallel multi-grid technique similar to MUDPACK. For the solid phase we employ a master-slaves technique where one processor handles all the particles contained in its Eulerian fluid sub-domain and zero or more neighbor processors collaborate in the computation of particle-related quantities whenever a particle position overlaps the boundary of a sub-domain. The parallel efficiency for some preliminary computations is presented. (Author) 9 refs

  7. Simulation of Particulate Flows Multi-Processor Machines with Distributed Memory

    Energy Technology Data Exchange (ETDEWEB)

    Uhlmann, M.

    2004-07-01

We presented a method for the parallelization of an immersed boundary algorithm for particulate flows using the MPI standard of communication. The treatment of the fluid phase used the domain decomposition technique over a Cartesian processor grid. The solution of the Helmholtz problem is approximately factorized and relies upon a parallel tri-diagonal solver; the Poisson problem is solved by means of a parallel multi-grid technique similar to MUDPACK. For the solid phase we employ a master-slaves technique where one processor handles all the particles contained in its Eulerian fluid sub-domain and zero or more neighbor processors collaborate in the computation of particle-related quantities whenever a particle position overlaps the boundary of a sub-domain. The parallel efficiency for some preliminary computations is presented. (Author) 9 refs.

  8. Surface Characteristics of Machined NiTi Shape Memory Alloy: The Effects of Cryogenic Cooling and Preheating Conditions

    Science.gov (United States)

    Kaynak, Y.; Huang, B.; Karaca, H. E.; Jawahir, I. S.

    2017-07-01

This experimental study focuses on the phase state and phase transformation response of the surface and subsurface of machined NiTi alloys. X-ray diffraction (XRD) analysis and differential scanning calorimeter techniques were utilized to measure the phase state and the transformation response of machined specimens, respectively. Specimens were machined under dry machining at ambient temperature, preheated conditions, and cryogenic cooling conditions at various cutting speeds. The findings from this research demonstrate that cryogenic machining substantially alters the austenite finish temperature of martensitic NiTi alloy. The austenite finish (Af) temperature shows a more than 25 percent increase resulting from cryogenic machining compared with the austenite finish temperature of as-received NiTi. Dry and preheated conditions do not substantially alter the austenite finish temperature. XRD analysis shows that distinctive transformation from martensite to austenite occurs during the machining process in all three conditions. Complete transformation from martensite to austenite is observed in dry cutting at all selected cutting speeds.

  9. Virtual memory support for distributed computing environments using a shared data object model

    Science.gov (United States)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together contribute a novel approach to the support for flexible coherence under application control.

  10. The reminiscence bump without memories: The distribution of imagined word-cued and important autobiographical memories in a hypothetical 70-year-old.

    Science.gov (United States)

    Koppel, Jonathan; Berntsen, Dorthe

    2016-08-01

    The reminiscence bump is the disproportionate number of autobiographical memories dating from adolescence and early adulthood. It has often been ascribed to a consolidation of the mature self in the period covered by the bump. Here we stripped away factors relating to the characteristics of autobiographical memories per se, most notably factors that aid in their encoding or retention, by asking students to generate imagined word-cued and imagined 'most important' autobiographical memories of a hypothetical, prototypical 70-year-old of their own culture and gender. We compared the distribution of these fictional memories with the distributions of actual word-cued and most important autobiographical memories in a sample of 61-70-year-olds. We found a striking similarity between the temporal distributions of the imagined memories and the actual memories. These results suggest that the reminiscence bump is largely driven by constructive, schematic factors at retrieval, thereby challenging most existing theoretical accounts. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Distributed memory in a heterogeneous network, as used in the CERN-PS complex timing system

    CERN Document Server

    Kovaltsov, V I

    1995-01-01

    The Distributed Table Manager (DTM) is a fast and efficient utility for distributing named binary data structures called Tables, of arbitrary size and structure, around a heterogeneous network of computers to a set of registered clients. The Tables are transmitted over a UDP network between DTM servers in network format, where the servers perform the conversions to and from host format for local clients. The servers provide clients with synchronization mechanisms, a choice of network data flows, and table options such as keeping table disc copies, shared memory or heap memory table allocation, table read/write permissions, and table subnet broadcasting. DTM has been designed to be easily maintainable, and to automatically recover from the type of errors typically encountered in a large control system network. The DTM system is based on a three level server daemon hierarchy, in which an inter daemon protocol handles network failures, and incorporates recovery procedures which will guarantee table consistency w...

  12. Bearingless AC Homopolar Machine Design and Control for Distributed Flywheel Energy Storage

    Science.gov (United States)

    Severson, Eric Loren

    The increasing ownership of electric vehicles, in-home solar and wind generation, and wider penetration of renewable energies onto the power grid has created a need for grid-based energy storage to provide energy-neutral services. These services include frequency regulation, which requires short response-times, high power ramping capabilities, and several charge cycles over the course of one day; and diurnal load-/generation-following services to offset the inherent mismatch between renewable generation and the power grid's load profile, which requires low self-discharge so that a reasonable efficiency is obtained over a 24 hour storage interval. To realize the maximum benefits of energy storage, the technology should be modular and have minimum geographic constraints, so that it is easily scalable according to local demands. Furthermore, the technology must be economically viable to participate in the energy markets. There is currently no storage technology that is able to simultaneously meet all of these needs. This dissertation focuses on developing a new energy storage device based on flywheel technology to meet these needs. It is shown that the bearingless ac homopolar machine can be used to overcome key obstacles in flywheel technology, namely: unacceptable self-discharge and overall system cost and complexity. Bearingless machines combine the functionality of a magnetic bearing and a motor/generator into a single electromechanical device. Design of these machines is particularly challenging due to cross-coupling effects and trade-offs between motor and magnetic bearing capabilities. The bearingless ac homopolar machine adds to these design challenges due to its 3D flux paths requiring computationally expensive 3D finite element analysis. At the time this dissertation was started, bearingless ac homopolar machines were a highly immature technology. 
This dissertation advances the state-of-the-art of these machines through research contributions in the areas of

  13. Capacity for patterns and sequences in Kanerva's SDM as compared to other associative memory models. [Sparse, Distributed Memory

    Science.gov (United States)

    Keeler, James D.

    1988-01-01

The information capacity of Kanerva's Sparse Distributed Memory (SDM) and Hopfield-type neural networks is investigated. Under the approximations used here, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or the order of the model. The approximations are checked numerically. This same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns, and the addition of time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns.
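The SDM mechanism analyzed in this record can be sketched in a few lines: a set of random "hard locations" with up/down counters, Hamming-radius activation for writes and reads, and majority-sign decoding. This is a minimal illustrative implementation, not Keeler's analysis code; placing the target address among the hard locations is a demo convenience to make the small example deterministic, not part of Kanerva's model.

```python
import random

random.seed(0)
N = 64        # address/data width in bits
M = 256       # number of hard locations
R = 24        # Hamming activation radius

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def rand_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

# Hard locations: random addresses, each with N up/down counters.
addresses = [rand_bits(N) for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def write(addr, data):
    """Add the bipolar data vector into every activated location."""
    for loc, ctr in zip(addresses, counters):
        if hamming(addr, loc) <= R:
            for i, bit in enumerate(data):
                ctr[i] += 1 if bit else -1

def read(addr):
    """Sum counters over activated locations and threshold at zero."""
    sums = [0] * N
    for loc, ctr in zip(addresses, counters):
        if hamming(addr, loc) <= R:
            for i in range(N):
                sums[i] += ctr[i]
    return [1 if s > 0 else 0 for s in sums]

# Demo: store one pattern at its own address (autoassociative use).
pattern = rand_bits(N)
addresses[0] = pattern[:]   # demo convenience: guarantees >= 1 activated location
write(pattern, pattern)
assert read(pattern) == pattern

# Reading from a noisy address still recovers the stored pattern,
# because the activated sets overlap.
noisy = pattern[:]
for i in (0, 1, 2):
    noisy[i] ^= 1
assert read(noisy) == pattern
```

With a single stored pattern, every activated location that was also written holds an exact bipolar copy, so the thresholded sum recovers the pattern bit-for-bit; the capacity results in the record concern what happens as many patterns overlap in the same counters.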

  14. Immigration, Language Proficiency, and Autobiographical Memories: Lifespan Distribution and Second-Language Access

    OpenAIRE

    Esposito, Alena G.; Baker-Ward, Lynne

    2015-01-01

    This investigation examined two controversies in the autobiographical literature: how cross-language immigration affects the distribution of autobiographical memories across the lifespan and under what circumstances language-dependent recall is observed. Both Spanish/English bilingual immigrants and English monolingual non-immigrants participated in a cue word study, with the bilingual sample taking part in a within-subject language manipulation. The expected bump in the num...

  15. More than a filter: Feature-based attention regulates the distribution of visual working memory resources.

    Science.gov (United States)

    Dube, Blaire; Emrich, Stephen M; Al-Aidroos, Naseem

    2017-10-01

    Across 2 experiments we revisited the filter account of how feature-based attention regulates visual working memory (VWM). Originally drawing from discrete-capacity ("slot") models, the filter account proposes that attention operates like the "bouncer in the brain," preventing distracting information from being encoded so that VWM resources are reserved for relevant information. Given recent challenges to the assumptions of discrete-capacity models, we investigated whether feature-based attention plays a broader role in regulating memory. Both experiments used partial report tasks in which participants memorized the colors of circle and square stimuli, and we provided a feature-based goal by manipulating the likelihood that 1 shape would be probed over the other across a range of probabilities. By decomposing participants' responses using mixture and variable-precision models, we estimated the contributions of guesses, nontarget responses, and imprecise memory representations to their errors. Consistent with the filter account, participants were less likely to guess when the probed memory item matched the feature-based goal. Interestingly, this effect varied with goal strength, even across high probabilities where goal-matching information should always be prioritized, demonstrating strategic control over filter strength. Beyond this effect of attention on which stimuli were encoded, we also observed effects on how they were encoded: Estimates of both memory precision and nontarget errors varied continuously with feature-based attention. The results offer support for an extension to the filter account, where feature-based attention dynamically regulates the distribution of resources within working memory so that the most relevant items are encoded with the greatest precision. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Global asymptotic stability analysis of bidirectional associative memory neural networks with distributed delays and impulse

    International Nuclear Information System (INIS)

    Huang Zaitang; Luo Xiaoshu; Yang Qigui

    2007-01-01

Many systems existing in physics, chemistry, biology, engineering and information science can be characterized by impulsive dynamics caused by abrupt jumps at certain instants during the process. These complex dynamical behaviors can be modeled by impulsive differential systems or impulsive neural networks. This paper formulates and studies a new model of impulsive bidirectional associative memory (BAM) networks with finite distributed delays. Several fundamental issues, such as global asymptotic stability and existence and uniqueness of such BAM neural networks with impulses and distributed delays, are established

  17. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    Science.gov (United States)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension Df,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density ρ(2) was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ(2) on α is similar to that of Df,p on α. ρ(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ(2) tends to a stable equilibrium value.
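The class of walks described in this record can be sketched as follows. This is an illustrative simplification, not the authors' exact model: visited sites deposit a unit of information, stored information decays multiplicatively each step (a stand-in for the paper's power-law decay governed by the exponent α), and the walker prefers neighbouring sites carrying more remanent information.

```python
import random

def walk_with_memory(steps, enhancement=1.0, decay=0.9, seed=1):
    """2D lattice walk with memory enhancement and decay (illustrative).

    Each visit deposits `enhancement` units of information at the site;
    all stored information is multiplied by `decay` every step; the walker
    moves to a neighbour with probability proportional to 1 + its
    remanent information.
    """
    random.seed(seed)
    pos = (0, 0)
    memory = {pos: enhancement}
    path = [pos]
    for _ in range(steps):
        # Decay the remanent information everywhere, dropping negligible sites.
        memory = {s: v * decay for s, v in memory.items() if v * decay > 1e-12}
        x, y = pos
        nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        weights = [1.0 + memory.get(s, 0.0) for s in nbrs]
        pos = random.choices(nbrs, weights=weights)[0]
        memory[pos] = memory.get(pos, 0.0) + enhancement
        path.append(pos)
    return path, memory

path, memory = walk_with_memory(500)
```

With slow decay the walker tends to revisit information-rich regions (the locally dense regime the abstract describes); with fast decay the remanent information vanishes and the walk reverts to an ordinary random walk.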

  18. Supervised Machine Learning for Regionalization of Environmental Data: Distribution of Uranium in Groundwater in Ukraine

    Science.gov (United States)

    Govorov, Michael; Gienko, Gennady; Putrenko, Viktor

    2018-05-01

In this paper, several supervised machine learning algorithms were explored to define homogeneous regions of concentration of uranium in surface waters in Ukraine using multiple environmental parameters. The previous study was focused on finding the primary environmental parameters related to uranium in ground waters using several methods of spatial statistics and unsupervised classification. At this step, we refined the regionalization using Artificial Neural Network (ANN) techniques including Multilayer Perceptron (MLP), Radial Basis Function (RBF), and Convolutional Neural Network (CNN). The study is focused on building local ANN models which may significantly improve the prediction results of machine learning algorithms by taking into consideration non-stationarity and autocorrelation in spatial data.

  19. IDOCS: intelligent distributed ontology consensus system--the use of machine learning in retinal drusen phenotyping.

    Science.gov (United States)

    Thomas, George; Grassi, Michael A; Lee, John R; Edwards, Albert O; Gorin, Michael B; Klein, Ronald; Casavant, Thomas L; Scheetz, Todd E; Stone, Edwin M; Williams, Andrew B

    2007-05-01

    To use the power of knowledge acquisition and machine learning in the development of a collaborative computer classification system based on the features of age-related macular degeneration (AMD). A vocabulary was acquired from four AMD experts who examined 100 ophthalmoscopic images. The vocabulary was analyzed, hierarchically structured, and incorporated into a collaborative computer classification system called IDOCS. Using this system, three of the experts examined images from a second set of digital images compiled from more than 1000 patients with AMD. Images were annotated, and features were identified and defined. Decision trees, a machine learning method, were trained on the data collected and used to extract patterns. Interrelationships between the data from the different clinicians were investigated. Six drusen classes in the structured vocabulary were largely sufficient to describe all the identified features. The decision trees classified the data with 76.86% to 88.5% accuracy and distilled patterns in the form of hierarchical trees composed of 5 to 15 nodes. Experts were largely consistent in their characterization of soft, and to a lesser extent, hard drusen, but diverge in definition of other drusen. Size and crystalline morphology were the main determinants of drusen type across all experts. Machine learning is a powerful tool for the characterization of disease phenotypes. The creation of a defined feature set for AMD will facilitate the development of an IDOCS-based classification system.

  20. Distribution of return point memory states for systems with stochastic inputs

    International Nuclear Information System (INIS)

    Amann, A; Brokate, M; Rachinskii, D; Temnov, G

    2011-01-01

    We consider the long term effect of stochastic inputs on the state of an open loop system which exhibits the so-called return point memory. An example of such a system is the Preisach model; more generally, systems with the Preisach type input-state relationship, such as in spin-interaction models, are considered. We focus on the characterisation of the expected memory configuration after the system has been effected by the input for sufficiently long period of time. In the case where the input is given by a discrete time random walk process, or the Wiener process, simple closed form expressions for the probability density of the vector of the main input extrema recorded by the memory state, and scaling laws for the dimension of this vector, are derived. If the input is given by a general continuous Markov process, we show that the distribution of previous memory elements can be obtained from a Markov chain scheme which is derived from the solution of an associated one-dimensional escape type problem. Formulas for transition probabilities defining this Markov chain scheme are presented. Moreover, explicit formulas for the conditional probability densities of previous main extrema are obtained for the Ornstein-Uhlenbeck input process. The analytical results are confirmed by numerical experiments.
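The return point memory underlying this record has a simple operational core: the memory state is the alternating sequence of dominant input extrema, and a new extremum that reaches past an earlier one wipes out the smaller loops recorded in between. The sketch below implements that wiping-out rule for a sequence of input turning points; it is an illustrative reduction, not the authors' stochastic analysis, and it assumes the input is already given as alternating turning points.

```python
def update_memory(turning_points):
    """Reduce a sequence of input turning points to the stored sequence of
    dominant extrema under the wiping-out (return point memory) rule."""
    stack = []
    for x in turning_points:
        # A new extremum that reaches past stack[-2] closes (wipes) the
        # minor loop formed by the last two stored extrema.
        while len(stack) >= 2 and (
            (stack[-1] < stack[-2] and x >= stack[-2])    # rising past last max
            or (stack[-1] > stack[-2] and x <= stack[-2])  # falling past last min
        ):
            stack.pop()
            stack.pop()
        stack.append(x)
    return stack

# The excursion to 4 wipes the smaller loops (3, -2) and (2, -1):
assert update_memory([5, -5, 3, -2, 2, -1, 4]) == [5, -5, 4]
```

For a stochastic input, the distribution of this surviving stack of main extrema is exactly the object the record characterizes.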

  1. An Investigation of the Micro-Electrical Discharge Machining of Nickel-Titanium Shape Memory Alloy Using Grey Relations Coupled with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Mustufa Haider Abidi

    2017-11-01

Full Text Available Shape memory alloys (SMAs) are advanced engineering materials which possess shape memory effects and super-elastic properties. Their high strength, high wear-resistance, pseudo plasticity, etc., make the machining of Ni-Ti based SMAs difficult using traditional techniques. Among all non-conventional processes, micro-electric discharge machining (micro-EDM) is considered one of the leading processes for micro-machining, owing to its high aspect ratio and capability to machine hard-to-cut materials with good surface finish. The selection of the most appropriate input parameter combination to provide the optimum values for various responses is very important in micro-EDM. This article demonstrates the methodology for optimizing multiple quality characteristics (overcut, taper angle and surface roughness) to enhance the quality of micro-holes in Ni-Ti based alloy, using the Grey–Taguchi method. A Taguchi-based grey relational analysis coupled with principal component analysis (Grey-PCA) methodology was implemented to investigate the effect of three important micro-EDM process parameters, namely capacitance, voltage and electrode material. The analysis of the individual responses established the importance of multi-response optimization. The main effects plots for the micro-EDM parameters and Analysis of Variance (ANOVA) indicate that every parameter does not produce the same effect on individual responses, and also that the percent contribution of each parameter to individual responses is highly varied. As a result, multi-response optimization was implemented using Grey-PCA. Further, this study revealed that the electrode material had the strongest effect on the multi-response parameter, followed by the voltage and capacitance. The main effects plot for the Grey-PCA shows that the micro-EDM parameters “capacitance” at level-2 (i.e., 475 pF), “discharge voltage” at level-1 (i.e., 80 V) and the “electrode material” Cu provided the best multi-response.

  2. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Roytershteyn, Vadim

    2014-07-01

The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.

  3. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    Science.gov (United States)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assembly, like the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method and the support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea of DCSRM is introduced. The dynamic assembly probabilistic design of aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results reveal that the optimal static blade-tip clearance of the HPT is obtained for the design of BTRRC, improving the performance and reliability of the aeroengine. The comparison of methods shows that DCSRM has high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assembly and enriches mechanical reliability theory and method.

  4. Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines

    Science.gov (United States)

    Delahaye, Jean-Paul; Gauvrit, Nicolas

    2014-01-01

Drawing on various notions from theoretical computer science, we present a novel numerical approach, motivated by the notion of algorithmic probability, to the problem of approximating the Kolmogorov-Chaitin complexity of short strings. The method is an alternative to the traditional lossless compression algorithms, which it may complement, the two being serviceable for different string lengths. We provide a thorough analysis for all binary strings of length n < 12 and for most strings of length 12 ≤ n ≤ 16 by running all ~2.5 × 10^13 Turing machines with 5 states and 2 symbols (8 × 22^9 with reduction techniques) using the most standard formalism of Turing machines, used, for example, in the Busy Beaver problem. We address the question of stability and error estimation, the sensitivity of the continued application of the method for wider coverage and better accuracy, and provide statistical evidence suggesting robustness. As with compression algorithms, this work promises to deliver a range of applications, and to provide insight into the question of complexity calculation of finite (and short) strings. Additional material can be found at the Algorithmic Nature Group website at http://www.algorithmicnature.org. An Online Algorithmic Complexity Calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com. PMID:24809449

  5. Calculating Kolmogorov complexity from the output frequency distributions of small Turing machines.

    Directory of Open Access Journals (Sweden)

    Fernando Soler-Toscano

Full Text Available Drawing on various notions from theoretical computer science, we present a novel numerical approach, motivated by the notion of algorithmic probability, to the problem of approximating the Kolmogorov-Chaitin complexity of short strings. The method is an alternative to the traditional lossless compression algorithms, which it may complement, the two being serviceable for different string lengths. We provide a thorough analysis for all Σ_{n=1}^{11} 2^n binary strings of length n < 12 and for most strings of length 12 ≤ n ≤ 16 by running all ~2.5 × 10^13 Turing machines with 5 states and 2 symbols (8 × 22^9 with reduction techniques) using the most standard formalism of Turing machines, used, for example, in the Busy Beaver problem. We address the question of stability and error estimation, the sensitivity of the continued application of the method for wider coverage and better accuracy, and provide statistical evidence suggesting robustness. As with compression algorithms, this work promises to deliver a range of applications, and to provide insight into the question of complexity calculation of finite (and short) strings. Additional material can be found at the Algorithmic Nature Group website at http://www.algorithmicnature.org. An Online Algorithmic Complexity Calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com.
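The coding-theorem method behind both of these records turns an output frequency distribution into complexity estimates: the more often small machines produce a string s, the higher its algorithmic probability D(s), and K(s) is estimated as −log₂ D(s). The sketch below applies that formula to a toy table of counts; the counts are hypothetical stand-ins, since the real distribution comes from the ~2.5 × 10^13 machine runs described above.

```python
from math import log2

def ctm_complexity(output_counts):
    """Coding-theorem estimate: K(s) ~= -log2 D(s), where D(s) is the
    relative output frequency of s among halting machines."""
    total = sum(output_counts.values())
    return {s: -log2(c / total) for s, c in output_counts.items()}

# Hypothetical counts standing in for a real Turing-machine census:
counts = {"0": 4000, "1": 4000, "00": 900, "01": 600, "010": 150}
K = ctm_complexity(counts)

# More frequently produced strings receive lower complexity estimates,
# and strings produced equally often receive equal estimates.
assert K["0"] < K["00"] < K["010"]
assert K["0"] == K["1"]
```

This is what makes the method complementary to compression: it assigns graded, meaningful values to strings far too short for any compressor to exploit.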

  6. The Cortex Transform as an image preprocessor for sparse distributed memory: An initial study

    Science.gov (United States)

    Olshausen, Bruno; Watson, Andrew

    1990-01-01

An experiment is described which was designed to evaluate the use of the Cortex Transform as an image preprocessor for Sparse Distributed Memory (SDM). In the experiment, a set of images were injected with Gaussian noise, preprocessed with the Cortex Transform, and then encoded into bit patterns. The various spatial frequency bands of the Cortex Transform were encoded separately so that they could be evaluated based on their ability to properly cluster patterns belonging to the same class. The results of this study indicate that by simply encoding the low-pass band of the Cortex Transform, a very suitable input representation for the SDM can be achieved.

  7. Convergence dynamics of hybrid bidirectional associative memory neural networks with distributed delays

    International Nuclear Information System (INIS)

    Liao Xiaofeng; Wong, K.-W.; Yang Shizhong

    2003-01-01

    In this Letter, the characteristics of the convergence dynamics of hybrid bidirectional associative memory neural networks with distributed transmission delays are studied. Without assuming the symmetry of synaptic connection weights and the monotonicity and differentiability of activation functions, the Lyapunov functionals are constructed and the generalized Halanay-type inequalities are employed to derive the delay-independent sufficient conditions under which the networks converge exponentially to the equilibria associated with temporally uniform external inputs. Some examples are given to illustrate the correctness of our results

  8. User and Machine Authentication and Authorization Infrastructure for Distributed Wireless Sensor Network Testbeds

    Directory of Open Access Journals (Sweden)

    Gerald Wagenknecht

    2013-03-01

Full Text Available The intention of an authentication and authorization infrastructure (AAI) is to simplify and unify access to different web resources. With a single login, a user can access web applications at multiple organizations. The Shibboleth authentication and authorization infrastructure is a standards-based, open source software package for web single sign-on (SSO) across or within organizational boundaries. It allows service providers to make fine-grained authorization decisions for individual access of protected online resources. The Shibboleth system is a widely used AAI, but only supports protection of browser-based web resources. We have implemented a Shibboleth AAI extension to protect web services using the Simple Object Access Protocol (SOAP). Besides user authentication for browser-based web resources, this extension also provides user and machine authentication for web service-based resources. Although implemented for a Shibboleth AAI, the architecture can be easily adapted to other AAIs.

  9. Memory

    OpenAIRE

    Wager, Nadia

    2017-01-01

    This chapter will explore a response to traumatic victimisation which has divided the opinions of psychologists at an exponential rate. We will be examining amnesia for memories of childhood sexual abuse and the potential to recover these memories in adulthood. Whilst this phenomenon is generally accepted in clinical circles, it is seen as highly contentious amongst research psychologists, particularly experimental cognitive psychologists. The chapter will begin with a real case study of a wo...

  10. Discrete Ziggurat: A time-memory trade-off for sampling from a Gaussian distribution over the integers

    NARCIS (Netherlands)

    Buchmann, J.; Cabarcas, D.; Göpfert, F.; Hülsing, A.T.; Weiden, P.; Lange, T.; Lauter, K.; Lisonek, P.

    2014-01-01

    Several lattice-based cryptosystems require to sample from a discrete Gaussian distribution over the integers. Existing methods to sample from such a distribution either need large amounts of memory or they are very slow. In this paper we explore a different method that allows for a flexible
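One of the simple, memory-light but slow baselines that a time-memory trade-off like the Discrete Ziggurat improves upon is plain rejection sampling. The sketch below is that baseline, not the Ziggurat itself: propose a uniform integer in a truncated range and accept with probability proportional to the Gaussian weight exp(−x²/(2σ²)); the tail-cut parameter τ is a conventional truncation choice, not taken from this paper.

```python
import math
import random

def sample_discrete_gaussian(sigma, tau=10, rng=random):
    """Rejection-sample from the discrete Gaussian over the integers,
    proportional to exp(-x^2 / (2 sigma^2)), truncated to |x| <= tau*sigma."""
    bound = int(math.ceil(tau * sigma))
    while True:
        x = rng.randint(-bound, bound)  # uniform integer proposal
        if rng.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
            return x

random.seed(42)
samples = [sample_discrete_gaussian(3.0) for _ in range(20000)]
```

The baseline needs essentially no memory but wastes most proposals when σ is large; table-based methods like the Ziggurat spend memory on precomputed rectangles to make almost every proposal accept.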

  11. Accelerated Cyclic Reduction: A Distributed-Memory Fast Solver for Structured Linear Systems

    KAUST Repository

    Chávez, Gustavo

    2017-12-15

We present Accelerated Cyclic Reduction (ACR), a distributed-memory fast solver for rank-compressible block tridiagonal linear systems arising from the discretization of elliptic operators, developed here for three dimensions. Algorithmic synergies between Cyclic Reduction and hierarchical matrix arithmetic operations result in a solver that has O(k N log N (log N + k^2)) arithmetic complexity and O(k N log N) memory footprint, where N is the number of degrees of freedom and k is the rank of a block in the hierarchical approximation, and which exhibits substantial concurrency. We provide a baseline for performance and applicability by comparing with the multifrontal method with and without hierarchical semi-separable matrices, with algebraic multigrid and with the classic cyclic reduction method. Over a set of large-scale elliptic systems with features of nonsymmetry and indefiniteness, the robustness of the direct solvers extends beyond that of the multigrid solver, and relative to the multifrontal approach ACR has lower or comparable execution time and size of the factors, with substantially lower numerical ranks. ACR exhibits good strong and weak scaling in a distributed context and, as with any direct solver, is advantageous for problems that require the solution of multiple right-hand sides. Numerical experiments show that the rank k patterns are of O(1) for the Poisson equation and of O(n) for the indefinite Helmholtz equation. The solver is ideal in situations where low-accuracy solutions are sufficient, or otherwise as a preconditioner within an iterative method.
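The classic cyclic reduction scheme that ACR accelerates with hierarchical matrices can be shown on a scalar tridiagonal system: each level eliminates the odd-indexed unknowns by combining each equation with its two neighbours, halving the problem until one unknown remains, then back-substitutes level by level. This is an illustrative sketch of the classic (non-accelerated) method for n = 2^k − 1 unknowns, not the ACR solver.

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c,
    right-hand side d) by classic cyclic reduction. n must equal 2**k - 1;
    a[0] and c[-1] are unused and taken as 0."""
    n = len(b)
    k = n.bit_length()                   # n = 2**k - 1
    assert n == (1 << k) - 1
    a, b, c, d = a[:], b[:], c[:], d[:]  # work on copies
    # Forward phase: eliminate odd-indexed unknowns level by level.
    for level in range(k - 1):
        stride = 1 << level
        for i in range(2 * stride - 1, n, 2 * stride):
            al = a[i] / b[i - stride]
            be = c[i] / b[i + stride]
            b[i] -= al * c[i - stride] + be * a[i + stride]
            d[i] -= al * d[i - stride] + be * d[i + stride]
            a[i] = -al * a[i - stride]
            c[i] = -be * c[i + stride]
    # Solve the single remaining equation, then back-substitute.
    x = [0.0] * n
    mid = n // 2
    x[mid] = d[mid] / b[mid]
    for level in range(k - 2, -1, -1):
        stride = 1 << level
        for i in range(stride - 1, n, 2 * stride):
            left = a[i] * x[i - stride] if i - stride >= 0 else 0.0
            right = c[i] * x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - left - right) / b[i]
    return x
```

Within a level, every reduced equation is independent of the others, which is exactly the concurrency ACR exploits; ACR's contribution is replacing these scalar combinations with rank-compressed block operations.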

  12. Accelerated Cyclic Reduction: A Distributed-Memory Fast Solver for Structured Linear Systems

    KAUST Repository

    Chávez, Gustavo; Turkiyyah, George; Zampini, Stefano; Ltaief, Hatem; Keyes, David E.

    2017-01-01

    We present Accelerated Cyclic Reduction (ACR), a distributed-memory fast solver for rank-compressible block tridiagonal linear systems arising from the discretization of elliptic operators, developed here for three dimensions. Algorithmic synergies between Cyclic Reduction and hierarchical matrix arithmetic operations result in a solver that has O(kN log N(log N + k²)) arithmetic complexity and O(kN log N) memory footprint, where N is the number of degrees of freedom and k is the rank of a block in the hierarchical approximation, and which exhibits substantial concurrency. We provide a baseline for performance and applicability by comparing with the multifrontal method with and without hierarchical semi-separable matrices, with algebraic multigrid and with the classic cyclic reduction method. Over a set of large-scale elliptic systems with features of nonsymmetry and indefiniteness, the robustness of the direct solvers extends beyond that of the multigrid solver, and relative to the multifrontal approach, ACR has lower or comparable execution time and size of the factors, with substantially lower numerical ranks. ACR exhibits good strong and weak scaling in a distributed context and, as with any direct solver, is advantageous for problems that require the solution of multiple right-hand sides. Numerical experiments show that the rank k patterns are of O(1) for the Poisson equation and of O(n) for the indefinite Helmholtz equation. The solver is ideal in situations where low-accuracy solutions are sufficient, or otherwise as a preconditioner within an iterative method.

  13. Parallel definition of tear film maps on distributed-memory clusters for the support of dry eye diagnosis.

    Science.gov (United States)

    González-Domínguez, Jorge; Remeseiro, Beatriz; Martín, María J

    2017-02-01

    The analysis of the interference patterns on the tear film lipid layer is a useful clinical test to diagnose dry eye syndrome. This task can be automated with a high degree of accuracy by means of tear film maps. However, the time required by existing applications to generate them prevents wider acceptance of this method by medical experts. Multithreading has previously been employed successfully by the authors to accelerate the tear film map definition on multicore single-node machines. In this work, we propose a hybrid message-passing and multithreading parallel approach that further accelerates the generation of tear film maps by exploiting the computational capabilities of distributed-memory systems such as multicore clusters and supercomputers. The algorithm for drawing tear film maps is parallelized using the Message Passing Interface (MPI) for inter-node communications and the multithreading support available in the C++11 standard for intra-node parallelization. The original algorithm is modified to reduce communications and increase scalability. The hybrid method has been tested on 32 nodes of an Intel cluster (with two 12-core Haswell 2680v3 processors per node) using 50 representative images. Results show that the maximum runtime is reduced from almost two minutes using the previous multithreaded-only approach to less than ten seconds using the hybrid method. The hybrid MPI/multithreaded implementation can be used by medical experts to obtain tear film maps in only a few seconds, which will significantly accelerate and facilitate the diagnosis of dry eye syndrome. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Effect of Machine Smoking Intensity and Filter Ventilation Level on Gas-Phase Temperature Distribution Inside a Burning Cigarette

    Directory of Open Access Journals (Sweden)

    Li Bin

    2015-01-01

    Full Text Available Accurate measurements of cigarette coal temperature are essential to understand the thermophysical and thermochemical processes in a burning cigarette. The last systematic studies of cigarette burning temperature measurements were conducted in the mid-1970s. Contemporary cigarettes have evolved in design features and multiple standard machine-smoking regimes have also become available, hence there is a need to re-examine cigarette combustion. In this work, we performed systematic measurements on the gas-phase temperature of burning cigarettes using an improved fine thermocouple technique. The effects of machine-smoking parameters (puff volume and puff duration) and filter ventilation levels were studied with high spatial and time resolutions during single puffs. The experimental results are presented in a number of different ways to highlight the dynamic and complex thermal processes inside a burning coal. A mathematical distribution equation was used to fit the experimental temperature data. Extracting and plotting the distribution parameters against puffing time revealed complex temperature profiles under different coal volumes as a function of puffing intensity or filter ventilation level. By dividing the coal volume prior to puffing into three temperature ranges (low-temperature from 200 to 400 °C, medium-temperature from 400 to 600 °C, and high-temperature above 600 °C) and following their development at different smoking regimes, useful mechanistic details were obtained. Finally, direct visualisation of the gas-phase temperature through detailed temperature and temperature gradient contour maps provided further insights into the complex thermophysics of the burning coal. [Beitr. Tabakforsch. Int. 26 (2014) 191-203]

  15. HYGIENIC AND HEALTH QUALITY OF HOT BEVERAGES DISTRIBUTED BY VENDING MACHINES

    Directory of Open Access Journals (Sweden)

    L. Vallone

    2011-08-01

    Full Text Available Food and beverage vending has developed greatly in Italy over the last 40 years. From the hygienic and health point of view, the quality of the products distributed by vending machines is essentially related to three factors: the quality of the raw materials, the quality of the tap water, and the working order, cleanliness and hygienic status of the equipment. In this work we tested these features: we evaluated the microbiological and fungal quality of the raw materials (powders), of the distributed hot beverages and of the equipment used. Despite the contamination levels shown by the results of this study, the temperature of the boiler is sufficient to significantly reduce bacterial and fungal loads. To obtain satisfactory results on the quality of the delivered hot beverages, it is necessary to apply correct maintenance and cleaning/sanitation procedures to the equipment, as well as an appropriate selection of suppliers.

  16. Memories.

    Science.gov (United States)

    Brand, Judith, Ed.

    1998-01-01

    This theme issue of the journal "Exploring" covers the topic of "memories" and describes an exhibition at San Francisco's Exploratorium that ran from May 22, 1998 through January 1999 and that contained over 40 hands-on exhibits, demonstrations, artworks, images, sounds, smells, and tastes that demonstrated and depicted the biological,…

  17. A QDWH-Based SVD Software Framework on Distributed-Memory Manycore Systems

    KAUST Repository

    Sukkari, Dalal

    2017-01-01

    This paper presents a high performance software framework for computing a dense SVD on distributed-memory manycore systems. Originally introduced by Nakatsukasa et al. (Nakatsukasa et al. 2010; Nakatsukasa and Higham 2013), the SVD solver relies on the polar decomposition using the QR Dynamically-Weighted Halley algorithm (QDWH). Although the QDWH-based SVD algorithm performs a significant amount of extra floating-point operations compared to the traditional SVD with the one-stage bidiagonal reduction, the inherent high level of concurrency associated with Level 3 BLAS compute-bound kernels ultimately compensates for the arithmetic complexity overhead. Using the ScaLAPACK two-dimensional block cyclic data distribution with a rectangular processor topology, the resulting QDWH-SVD further reduces excessive communications during the panel factorization, while increasing the degree of parallelism during the update of the trailing submatrix, as opposed to relying on the default square processor grid. After detailing the algorithmic complexity and the memory footprint of the algorithm, we conduct a thorough performance analysis and study the impact of the grid topology on the performance by looking at the communication and computation profiling trade-offs. We report performance results against state-of-the-art existing QDWH software implementations (e.g., Elemental) and their SVD extensions on large-scale distributed-memory manycore systems based on commodity Intel x86 Haswell processors and the Knights Landing (KNL) architecture. The QDWH-SVD framework achieves up to 3-fold and 8-fold speedups on the Haswell- and KNL-based platforms, respectively, against ScaLAPACK PDGESVD and turns out to be a competitive alternative for well- and ill-conditioned matrices. We finally come up herein with a performance model based on these empirical results. Our QDWH-based polar decomposition and its SVD extension are freely available at https://github.com/ecrc/qdwh.git and https
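
    The QDWH-SVD approach computes the polar decomposition first and extracts the SVD from its symmetric factor. A simplified NumPy sketch with fixed Halley weights follows (QDWH's dynamic weighting and QR-based updates accelerate and stabilize this same iteration; this is an illustration, not the ECRC implementation):

```python
import numpy as np

def polar_halley(A, iters=30):
    """Polar factor of a square nonsingular A via a fixed-weight
    Halley iteration; each step maps every singular value closer to 1."""
    X = A / np.linalg.norm(A, 2)           # scale so singular values <= 1
    I = np.eye(A.shape[1])
    for _ in range(iters):
        G = X.T @ X
        # Explicit inverse for clarity; QDWH uses a QR-based update instead.
        X = X @ (3.0 * I + G) @ np.linalg.inv(I + 3.0 * G)
    return X                               # orthogonal polar factor W

def svd_via_polar(A):
    """SVD from the polar decomposition: A = W H, H = V diag(s) V^T,
    hence A = (W V) diag(s) V^T."""
    W = polar_halley(A)
    H = W.T @ A                            # symmetric positive semidefinite
    s, V = np.linalg.eigh(H)               # singular values, ascending order
    return W @ V, s, V
```

    The symmetric eigenproblem on H replaces the bidiagonal reduction of the traditional SVD, which is where the extra flops but higher concurrency come from.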

  18. Morphing Metal and Elastomer Bicontinuous Foams for Reversible Stiffness, Shape Memory, and Self-Healing Soft Machines.

    Science.gov (United States)

    Van Meerbeek, Ilse M; Mac Murray, Benjamin C; Kim, Jae Woo; Robinson, Sanlin S; Zou, Perry X; Silberstein, Meredith N; Shepherd, Robert F

    2016-04-13

    A metal-elastomer-foam composite that varies in stiffness, that can change shape and store shape memory, that self-heals, and that welds into monolithic structures from smaller components is presented. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Diffusion with space memory modelled with distributed order space fractional differential equations

    Directory of Open Access Journals (Sweden)

    M. Caputo

    2003-06-01

    Full Text Available Distributed order fractional differential equations (Caputo, 1995, 2001; Bagley and Torvik, 2000a,b) were first used in the time domain; they are here considered in the space domain and introduced in the constitutive equation of diffusion. The solutions of the classic problems are obtained, with closed form formulae. In general, the Green functions act as low-pass filters in the frequency domain. The major difference with the case when a single space fractional derivative is present in the constitutive equations of diffusion (Caputo and Plastino, 2002) is that the solutions found here are potentially more flexible to represent more complex media (Caputo, 2001a). The difference between the space memory medium and that with the time memory is that the former is more flexible to represent local phenomena while the latter is more flexible to represent variations in space. Concerning the boundary value problem, the difference with the solution of the classic diffusion medium, in the case when a constant boundary pressure is assigned and the pressure in the medium is initially nil, is that one also needs to assign the first-order space derivative at the boundary.
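
    In schematic form (a hedged paraphrase of the idea, not the paper's exact notation), a distributed-order space-fractional diffusion equation replaces a single fractional order with a weighted continuum of orders:

```latex
\frac{\partial p}{\partial t}(x,t)
  = \int_{1}^{2} b(\beta)\,
    \frac{\partial^{\beta} p}{\partial |x|^{\beta}}(x,t)\,\mathrm{d}\beta ,
\qquad b(\beta) \ge 0 ,
```

    where b(β) weights the contribution of each fractional order; the single-order case of Caputo and Plastino (2002) is recovered when b(β) concentrates at one value β₀.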

  20. MSAProbs-MPI: parallel multiple sequence aligner for distributed-memory systems.

    Science.gov (United States)

    González-Domínguez, Jorge; Liu, Yongchao; Touriño, Juan; Schmidt, Bertil

    2016-12-15

    MSAProbs is a state-of-the-art protein multiple sequence alignment tool based on hidden Markov models. It can achieve high alignment accuracy at the expense of relatively long runtimes for large-scale input datasets. In this work we present MSAProbs-MPI, a distributed-memory parallel version of the multithreaded MSAProbs tool that is able to reduce runtimes by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on a cluster with 32 nodes (each containing two Intel Haswell processors) shows reductions in execution time of over one order of magnitude for typical input datasets. Furthermore, MSAProbs-MPI using eight nodes is faster than the GPU-accelerated QuickProbs running on a Tesla K20. Another strong point is that MSAProbs-MPI can deal with large datasets for which MSAProbs and QuickProbs might fail due to time and memory constraints, respectively. Source code in C ++ and MPI running on Linux systems as well as a reference manual are available at http://msaprobs.sourceforge.net CONTACT: jgonzalezd@udc.esSupplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger

    2013-05-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash. We apply an edge list partitioning technique, designed to accommodate high-degree vertices (hubs) that create scaling challenges when processing scale-free graphs. In addition to partitioning hubs, we use ghost vertices to represent the hubs to reduce communication hotspots. We present a scaling study with three important graph algorithms: Breadth-First Search (BFS), K-Core decomposition, and Triangle Counting. We also demonstrate scalability on BG/P Intrepid by comparing to best known Graph500 results. We show results on two clusters with local NVRAM storage that are capable of traversing trillion-edge scale-free graphs. By leveraging node-local NAND Flash, our approach can process thirty-two times larger datasets with only a 39% performance degradation in Traversed Edges Per Second (TEPS). © 2013 IEEE.
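
    A toy illustration of the edge-list partitioning idea described above, with hub edges spread across parts and each hub recorded as a ghost vertex on the parts that touch it (helper names are hypothetical; the paper's implementation targets MPI and node-local NVRAM):

```python
from collections import defaultdict

def partition_edges(edges, n_parts, hub_threshold):
    """Split an edge list into n_parts, spreading the edges of
    high-degree vertices (hubs) round-robin and recording each hub
    as a ghost vertex on every part that touches it."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    hubs = {w for w, d in degree.items() if d >= hub_threshold}
    parts = [[] for _ in range(n_parts)]
    ghosts = [set() for _ in range(n_parts)]
    for i, (u, v) in enumerate(edges):
        # Hub edges are spread evenly; other edges are hashed to a part.
        p = i % n_parts if (u in hubs or v in hubs) else hash((u, v)) % n_parts
        parts[p].append((u, v))
        ghosts[p].update(w for w in (u, v) if w in hubs)
    return parts, ghosts, hubs
```

    Spreading hub edges rather than assigning whole vertices is what keeps any single part from becoming a communication hotspot when the degree distribution is heavy-tailed.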

  2. Short-Term Distribution System State Forecast Based on Optimal Synchrophasor Sensor Placement and Extreme Learning Machine

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Zhang, Yingchen

    2016-11-14

    This paper proposes an approach for distribution system state forecasting, which aims to provide accurate and high-speed state forecasting with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors and keep the whole distribution system numerically and topologically observable. Then, the weighted least square (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN carries a heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states from the historical system states. The proposed approach is shown to be effective and accurate on the testing results.
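
    The forecaster's core, an extreme learning machine, trains only the output layer: the hidden layer is random and fixed, and the output weights come from a single least-squares solve. A minimal sketch (not the paper's code; the random-weight scale and tanh activation are arbitrary illustrative choices):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random fixed hidden layer,
    output weights from a least-squares solve (no backpropagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=4.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(scale=4.0, size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                                  # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

    Because training reduces to one linear solve, the ELM avoids both the iterative training cost of an ANN and the hyperparameter search of SVR, which is the trade-off the abstract highlights.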

  3. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  4. Studies of electron collisions with polyatomic molecules using distributed-memory parallel computers

    International Nuclear Information System (INIS)

    Winstead, C.; Hipes, P.G.; Lima, M.A.P.; McKoy, V.

    1991-01-01

    Elastic electron scattering cross sections from 5--30 eV are reported for the molecules C 2 H 4 , C 2 H 6 , C 3 H 8 , Si 2 H 6 , and GeH 4 , obtained using an implementation of the Schwinger multichannel method for distributed-memory parallel computer architectures. These results, obtained within the static-exchange approximation, are in generally good agreement with the available experimental data. These calculations demonstrate the potential of highly parallel computation in the study of collisions between low-energy electrons and polyatomic gases. The computational methodology discussed is also directly applicable to the calculation of elastic cross sections at higher levels of approximation (target polarization) and of electronic excitation cross sections

  5. Comparison between sparsely distributed memory and Hopfield-type neural network models

    Science.gov (United States)

    Keeler, James D.

    1986-01-01

    The Sparsely Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. However, the total number of stored bits per matrix element is the same in the two models, as well as for extended models with higher order interactions. The models are also compared in their ability to store sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to cover sequences. Finally, it is shown how a generalization of the SDM allows storage of correlated input pattern vectors.
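
    The Hopfield side of the comparison stores ±1 patterns with a Hebbian outer-product rule and recalls them by thresholded updates; a minimal sketch (synchronous updates chosen for brevity):

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian outer-product storage of +/-1 patterns; zero diagonal."""
    P = np.array(patterns, dtype=float)   # rows are +/-1 patterns
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, state, steps=10):
    """Iterate thresholded (synchronous) updates from a noisy probe."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0.0, 1.0, -1.0)
    return s
```

    The capacity limitation discussed in the abstract shows up here directly: W is n x n for n-bit patterns, so the number of reliably stored patterns is tied to n, whereas the SDM's address space can grow independently of the stored-vector dimension.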

  6. Interoperable mesh components for large-scale, distributed-memory simulations

    International Nuclear Information System (INIS)

    Devine, K; Leung, V; Diachin, L; Miller, M

    2009-01-01

    SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications.

  7. Global exponential stability of bidirectional associative memory neural networks with distributed delays

    Science.gov (United States)

    Song, Qiankun; Cao, Jinde

    2007-05-01

    A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, employing homeomorphism theory, M-matrix theory and the inequality (a ≥ 0, b_k ≥ 0, q_k > 0 with …, and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.

  8. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-01-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. (orig.)

  9. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC system, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  10. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    Science.gov (United States)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data is being generated from global surveillance systems and model simulations. It is widely used to analyze environmental problems, such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging due to both data- and computing-intensive issues in data processing and analysis. To tackle these challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g. netCDF4, HDF4) as native formats, stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve datasets according to a defined geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy is applied to split the queried data equally across the computing nodes and store them in memory as climateRDDs for processing. By leveraging Spark SQL and user-defined functions (UDFs), climate data analysis operations can be conducted in intuitive SQL. ClimateSpark is evaluated by two use cases using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. One use case conducts a spatiotemporal query and visualizes the subset results in an animation; the other compares different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark can significantly accelerate data query and processing, and enables complex analysis services to be served in a SQL-style fashion.
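
    At its core, the spatiotemporal query service described above subsets an array by a geospatial and temporal bounding box. In plain NumPy (a sketch of the idea only, not ClimateSpark's Spark/HDFS implementation; the function name is hypothetical):

```python
import numpy as np

def bbox_query(data, times, lats, lons, t_rng, lat_rng, lon_rng):
    """Subset a (time, lat, lon) array by a temporal and geospatial
    bounding box, given the coordinate arrays for each axis."""
    ti = (times >= t_rng[0]) & (times <= t_rng[1])
    la = (lats >= lat_rng[0]) & (lats <= lat_rng[1])
    lo = (lons >= lon_rng[0]) & (lons <= lon_rng[1])
    # np.ix_ builds an open mesh so the three axis masks combine
    # into a rectangular sub-block rather than fancy-indexed points.
    return data[np.ix_(ti, la, lo)]
```

    In the distributed setting, the index additionally maps each such sub-block to the HDFS chunks that hold it, so only the relevant bytes are read.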

  11. Evaluation of a Connectionless NoC for a Real-Time Distributed Shared Memory Many-Core System

    NARCIS (Netherlands)

    Rutgers, J.H.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2012-01-01

    Real-time embedded systems like smartphones tend to comprise an ever increasing number of processing cores. For scalability and the need for guaranteed performance, the use of a connection-oriented network-on-chip (NoC) is advocated. Furthermore, a distributed shared memory architecture is preferred

  12. The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition

    Directory of Open Access Journals (Sweden)

    Ashkan Tousimojarad

    2013-12-01

    Full Text Available We present the Glasgow Parallel Reduction Machine (GPRM, a novel, flexible framework for parallel task-composition based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machine framework that enables the parallel task composition approach. We focus the discussion on GPIR, the functional language used as the intermediate representation of the bytecode running on the GPRM. Using examples in this language we show the flexibility and power of our task composition framework. We demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor and an AMD 48-core processor system. We also compare our framework with OpenMP tasks in a parallel pointer chasing algorithm running on the Tilera processor. Our results show that the GPRM programs outperform the corresponding OpenMP codes on all test platforms, and can greatly facilitate writing of parallel programs, in particular non-data parallel algorithms such as reductions.

  13. Distributed cerebellar plasticity implements generalized multiple-scale memory components in real-robot sensorimotor tasks

    Directory of Open Access Journals (Sweden)

    Claudia eCasellato

    2015-02-01

    Full Text Available The cerebellum plays a crucial role in motor learning and acts as a predictive controller. Modeling it and embedding it into sensorimotor tasks allows us to create functional links between plasticity mechanisms, neural circuits and behavioral learning. Moreover, if applied to real-time control of a neurorobot, the cerebellar model has to deal with a real, noisy and changing environment, thus showing its robustness and effectiveness in learning. A biologically inspired cerebellar model with distributed plasticity, both at cortical and nuclear sites, has been used. Two cerebellum-mediated paradigms have been designed: an associative Pavlovian task and a vestibulo-ocular reflex, with multiple sessions of acquisition and extinction and with different stimuli and perturbation patterns. The cerebellar controller succeeded in generating conditioned responses and finely tuned eye movement compensation, thus reproducing human-like behaviors. Through a productive plasticity transfer from cortical to nuclear sites, the distributed cerebellar controller showed in both tasks the capability to optimize learning on multiple time-scales, to store motor memory and to effectively adapt to dynamic ranges of stimuli.

  14. Altered distribution of peripheral blood memory B cells in humans chronically infected with Trypanosoma cruzi.

    Science.gov (United States)

    Fernández, Esteban R; Olivera, Gabriela C; Quebrada Palacio, Luz P; González, Mariela N; Hernandez-Vasquez, Yolanda; Sirena, Natalia María; Morán, María L; Ledesma Patiño, Oscar S; Postan, Miriam

    2014-01-01

    Numerous abnormalities of the peripheral blood T cell compartment have been reported in human chronic Trypanosoma cruzi infection and related to prolonged antigenic stimulation by persisting parasites. Herein, we measured circulating lymphocytes of various phenotypes based on the differential expression of CD19, CD4, CD27, CD10, IgD, IgM, IgG and CD138 in a total of 48 T. cruzi-infected individuals and 24 healthy controls. Infected individuals had decreased frequencies of CD19+CD27+ cells, which positively correlated with the frequencies of CD4+CD27+ cells. The contraction of CD19+CD27+ cells was comprised of IgG+IgD-, IgM+IgD- and isotype switched IgM-IgD- memory B cells, CD19+CD10+CD27+ B cell precursors and terminally differentiated CD19+CD27+CD138+ plasma cells. Conversely, infected individuals had increased proportions of CD19+IgG+CD27-IgD- memory and CD19+IgM+CD27-IgD+ transitional/naïve B cells. These observations prompted us to assess soluble CD27, a molecule generated by the cleavage of membrane-bound CD27 and used to monitor systemic immune activation. Elevated levels of serum soluble CD27 were observed in infected individuals with Chagas cardiomyopathy, indicating its potentiality as an immunological marker for disease progression in endemic areas. In conclusion, our results demonstrate that chronic T. cruzi infection alters the distribution of various peripheral blood B cell subsets, probably related to the CD4+ T cell deregulation process provoked by the parasite in humans.

  15. Altered distribution of peripheral blood memory B cells in humans chronically infected with Trypanosoma cruzi.

    Directory of Open Access Journals (Sweden)

    Esteban R Fernández

    Full Text Available Numerous abnormalities of the peripheral blood T cell compartment have been reported in human chronic Trypanosoma cruzi infection and related to prolonged antigenic stimulation by persisting parasites. Herein, we measured circulating lymphocytes of various phenotypes based on the differential expression of CD19, CD4, CD27, CD10, IgD, IgM, IgG and CD138 in a total of 48 T. cruzi-infected individuals and 24 healthy controls. Infected individuals had decreased frequencies of CD19+CD27+ cells, which positively correlated with the frequencies of CD4+CD27+ cells. The contraction of CD19+CD27+ cells was comprised of IgG+IgD-, IgM+IgD- and isotype switched IgM-IgD- memory B cells, CD19+CD10+CD27+ B cell precursors and terminally differentiated CD19+CD27+CD138+ plasma cells. Conversely, infected individuals had increased proportions of CD19+IgG+CD27-IgD- memory and CD19+IgM+CD27-IgD+ transitional/naïve B cells. These observations prompted us to assess soluble CD27, a molecule generated by the cleavage of membrane-bound CD27 and used to monitor systemic immune activation. Elevated levels of serum soluble CD27 were observed in infected individuals with Chagas cardiomyopathy, indicating its potentiality as an immunological marker for disease progression in endemic areas. In conclusion, our results demonstrate that chronic T. cruzi infection alters the distribution of various peripheral blood B cell subsets, probably related to the CD4+ T cell deregulation process provoked by the parasite in humans.

  16. A cross-cultural study of the lifespan distributions of life script events and autobiographical memories of life story events

    DEFF Research Database (Denmark)

    Zaragoza Scherman, Alejandra; Salgado, Sinué; Shao, Zhifang

    Cultural Life Script Theory provides a cultural explanation of the reminiscence bump: adults older than 40 years remember more life events happening between 15 and 30 years of age. The cultural life script represents semantic knowledge about commonly shared expectations regarding the order and timing of major transitional life events in an idealized life course. By comparing the lifespan distribution of life script events and memories of life story events, we can determine the degree to which the cultural life script serves as a recall template for autobiographical memories, especially of positive...

  17. Kmerind: A Flexible Parallel Library for K-mer Indexing of Biological Sequences on Distributed Memory Systems.

    Science.gov (United States)

    Pan, Tony; Flick, Patrick; Jain, Chirag; Liu, Yongchao; Aluru, Srinivas

    2017-10-09

    Counting and indexing fixed length substrings, or k-mers, in biological sequences is a key step in many bioinformatics tasks including genome alignment and mapping, genome assembly, and error correction. While advances in next generation sequencing technologies have dramatically reduced the cost and improved latency and throughput, few bioinformatics tools can efficiently process the datasets at the current generation rate of 1.8 terabases every 3 days. We present Kmerind, a high performance parallel k-mer indexing library for distributed memory environments. The Kmerind library provides a set of simple and consistent APIs with sequential semantics and parallel implementations that are designed to be flexible and extensible. Kmerind's k-mer counter performs similarly or better than the best existing k-mer counting tools even on shared memory systems. In a distributed memory environment, Kmerind counts k-mers in a 120 GB sequence read dataset in less than 13 seconds on 1024 Xeon CPU cores, and fully indexes their positions in approximately 17 seconds. Querying for 1% of the k-mers in these indices can be completed in 0.23 seconds and 28 seconds, respectively. Kmerind is the first k-mer indexing library for distributed memory environments, and the first extensible library for general k-mer indexing and counting. Kmerind is available at https://github.com/ParBLiSS/kmerind.
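Kmerind's own C++/MPI API is not reproduced in this record; as a minimal single-node illustration of the core operation the library distributes, k-mer counting can be sketched in Python (function name and example sequence are illustrative):

```python
from collections import Counter

def count_kmers(seq, k):
    """Count every length-k substring (k-mer) of a sequence.

    A distributed counter such as Kmerind's partitions k-mers across
    ranks (typically by hashing each k-mer to a destination rank);
    here everything stays local for clarity.
    """
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = count_kmers("ACGTACGT", 4)
# "ACGT" appears twice (positions 0 and 4); 5 windows in total
```

In a distributed setting the same hash that routes a k-mer at build time also routes queries, so each rank owns a disjoint key range and lookups touch only one rank.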

  18. ParBiBit: Parallel tool for binary biclustering on modern distributed-memory systems.

    Science.gov (United States)

    González-Domínguez, Jorge; Expósito, Roberto R

    2018-01-01

Biclustering techniques are gaining attention in the analysis of large-scale datasets as they identify two-dimensional submatrices where both rows and columns are correlated. In this work we present ParBiBit, a parallel tool to accelerate the search of interesting biclusters on binary datasets, which are very popular in different fields such as genetics, marketing or text mining. It is based on the state-of-the-art sequential Java tool BiBit, which has been proved accurate by several studies, especially on scenarios that result in many large biclusters. ParBiBit uses the same methodology as BiBit (grouping the binary information into patterns) and provides the same results. Nevertheless, our tool significantly improves performance thanks to an efficient implementation based on C++11 that includes support for threads and MPI processes in order to exploit the compute capabilities of modern distributed-memory systems, which provide several multicore CPU nodes interconnected through a network. Our performance evaluation with 18 representative input datasets on two different eight-node systems shows that our tool is significantly faster than the original BiBit. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/parbibit/.
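BiBit's pattern-grouping methodology (seed a candidate bicluster from the bitwise AND of a row pair, then collect all rows containing that pattern) can be sketched as follows; the integer-bitmask row encoding and function name are illustrative, not ParBiBit's actual API:

```python
def bibit_like(matrix, min_cols=2, min_rows=2):
    """Seed biclusters from the bitwise AND of every row pair.

    matrix: list of rows, each encoded as an int bitmask of column membership.
    Returns {pattern: [row indices containing that pattern]}.
    """
    n = len(matrix)
    patterns = {}
    for i in range(n):
        for j in range(i + 1, n):
            pat = matrix[i] & matrix[j]          # columns shared by the pair
            if bin(pat).count("1") < min_cols or pat in patterns:
                continue                          # too narrow, or already seen
            rows = [r for r in range(n) if matrix[r] & pat == pat]
            if len(rows) >= min_rows:
                patterns[pat] = rows
    return patterns

biclusters = bibit_like([0b1100, 0b1101, 0b0011, 0b1011])
```

ParBiBit parallelizes the quadratic row-pair loop across threads and MPI processes, which is why the speedup scales with node count.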

  19. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    Science.gov (United States)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

The unprecedented growth of climate data creates new opportunities for climate studies, and yet big climate data pose a grand challenge to climatologists to efficiently manage and analyze big data. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index is built for the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebook). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multiple-dimensional, array-based datasets in various geoscience domains.
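The paper's spatiotemporal chunk index is built inside Spark; the underlying idea (skip any chunk whose bounding box misses the query window, so it is never read or preprocessed) can be sketched independently of Spark. The tuple layout below is a hypothetical simplification:

```python
def query_chunks(index, t0, t1, lat0, lat1, lon0, lon1):
    """Return ids of chunks whose bounding box overlaps the query window.

    index: list of (chunk_id, ct0, ct1, clat0, clat1, clon0, clon1)
    bounding boxes. Two intervals [a0, a1] and [b0, b1] overlap
    iff a0 <= b1 and b0 <= a1; the test is applied per axis.
    """
    return [cid for cid, ct0, ct1, cla0, cla1, clo0, clo1 in index
            if ct0 <= t1 and t0 <= ct1
            and cla0 <= lat1 and lat0 <= cla1
            and clo0 <= lon1 and lon0 <= clo1]

chunk_index = [("c0", 0, 10, -90, 0, 0, 180),
               ("c1", 20, 30, 0, 90, -180, 0)]
hits = query_chunks(chunk_index, 5, 8, -45, -10, 10, 20)
# only c0 overlaps the window
```

Pruning at the chunk level is what preserves data locality: each Spark task reads only the chunks it actually needs.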

  20. Modeling of long-range memory processes with inverse cubic distributions by the nonlinear stochastic differential equations

    Science.gov (United States)

    Kaulakys, B.; Alaburda, M.; Ruseckas, J.

    2016-05-01

    A well-known fact in the financial markets is the so-called ‘inverse cubic law’ of the cumulative distributions of the long-range memory fluctuations of market indicators such as a number of events of trades, trading volume and the logarithmic price change. We propose the nonlinear stochastic differential equation (SDE) giving both the power-law behavior of the power spectral density and the long-range dependent inverse cubic law of the cumulative distribution. This is achieved using the suggestion that when the market evolves from calm to violent behavior there is a decrease of the delay time of multiplicative feedback of the system in comparison to the driving noise correlation time. This results in a transition from the Itô to the Stratonovich sense of the SDE and yields a long-range memory process.
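The abstract does not reproduce the SDE itself. As a hedged sketch, the nonlinear SDE class studied by these authors is commonly written dx = (η − λ/2) x^(2η−1) dt + x^η dW, whose stationary density behaves as x^(−λ); λ = 4 then yields the inverse cubic cumulative distribution. A minimal Euler–Maruyama integration (the paper's delay-time mechanism and the Itô-to-Stratonovich transition are not modeled here):

```python
import math
import random

def simulate_sde(eta=2.5, lam=4.0, x0=1.0, dt=1e-5, steps=50000,
                 xmin=1.0, xmax=10.0, seed=1):
    """Euler-Maruyama for dx = (eta - lam/2) x^(2*eta - 1) dt + x^eta dW.

    Hard clamping to [xmin, xmax] stands in for the reflective
    boundaries normally used to keep this process on bounded support.
    """
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        drift = (eta - lam / 2.0) * x ** (2 * eta - 1)
        diffusion = x ** eta
        x += drift * dt + diffusion * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, xmin), xmax)   # bounded support
        path.append(x)
    return path
```

From a long trajectory one would estimate the power spectral density and the tail of the cumulative distribution to verify the 1/f spectrum and the x^(-3) tail claimed for this class.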

  1. FY1995 distributed control of man-machine cooperative multi agent systems; 1995 nendo ningen kyochogata multi agent kikai system no jiritsu seigyo

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

In the near future, distributed autonomous systems will be practical in many situations, e.g., interactive production systems, hazardous environments, nursing homes, and individual houses. The agents that constitute such distributed systems must not harm human beings and should operate economically. In this project, man-machine cooperative multi-agent systems are studied from many perspectives, and basic design technology and basic control techniques are developed by establishing fundamental theories and by constructing experimental systems. Theoretical and experimental studies are conducted in the following sub-projects: (1) Distributed cooperative control in multi-agent type actuation systems (2) Control of non-holonomic systems (3) Man-machine cooperative systems (4) Robot systems learning human skills (5) Robust force control of constrained systems. Across these sub-projects, the cooperative nature between machine agent systems and human beings, the interference between artificial multi-agents and the environment, the emergence of new functions through coordination of the multi-agents and the environment, robust force control against the environment, control methods for non-holonomic systems, and robot systems that can mimic and learn human skills were studied. In each sub-project, specific problems were highlighted and solutions were given based on the construction of experimental systems. (NEDO)

  2. Human-machine interactions

    Science.gov (United States)

    Forsythe, J Chris [Sandia Park, NM; Xavier, Patrick G [Albuquerque, NM; Abbott, Robert G [Albuquerque, NM; Brannon, Nathan G [Albuquerque, NM; Bernard, Michael L [Tijeras, NM; Speed, Ann E [Albuquerque, NM

    2009-04-28

    Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.

  3. Effect of starting microstructure upon the nucleation sites and distribution of graphite particles during a graphitising anneal of an experimental medium-carbon machining steel

    Energy Technology Data Exchange (ETDEWEB)

    Inam, A., E-mail: aqil.ceet@pu.edu.pk; Brydson, R., E-mail: mtlrmdb@leeds.ac.uk; Edmonds, D.V., E-mail: d.v.edmonds@leeds.ac.uk

    2015-08-15

    The potential for using graphite particles as an internal lubricant during machining is considered. Graphite particles were found to form during graphitisation of experimental medium-carbon steel alloyed with Si and Al. The graphite nucleation sites were strongly influenced by the starting microstructure, whether ferrite–pearlite, bainite or martensite, as revealed by light and electron microscopy. Favourable nucleation sites in the ferrite–pearlite starting microstructure were, not unexpectedly, found to be located within pearlite colonies, no doubt due to the presence of abundant cementite as a source of carbon. In consequence, the final distribution of graphite nodules in ferrite–pearlite microstructures was less uniform than for the bainite microstructure studied. In the case of martensite, this study found a predominance of nucleation at grain boundaries, again leading to less uniform graphite dispersions. - Highlights: • Metallography of formation of graphite particles in experimental carbon steel. • Potential for using graphite in steel as an internal lubricant during machining. • Microstructure features expected to influence improved machinability studied. • Influence of pre-anneal starting microstructure on graphite nucleation sites. • Influence of pre-anneal starting microstructure on graphite distribution. • Potential benefit is new free-cutting steel compositions without e.g. Pb alloying.

  4. The Milan Project: A New Method for High-Assurance and High-Performance Computing on Large-Scale Distributed Platforms

    National Research Council Canada - National Science Library

    Kedem, Zvi

    2000-01-01

    ...: Calypso, Chime, and Charlotte; which enable applications developed for ideal, shared memory, parallel machines to execute on distributed platforms that are subject to failures, slowdowns, and changing resource availability...

  5. Is the dose distribution distorted in IMRT and RapidArc treatment when patient plans are swapped across beam‐matched machines?

    Science.gov (United States)

    Radha, Chandrasekaran Anu; Subramani, Vendhan; Gunasekaran, Madhan Kumar

    2016-01-01

The purpose of this study is to evaluate the degree of dose distribution distortion in advanced treatments like IMRT and RapidArc when patient plans are swapped across dosimetrically equivalent, so‐called "beam‐matched" machines. For this purpose the work is divided into two stages. In the first stage, all basic 6 MV X‐ray beam properties (PDD, profiles, output factors, TPR20/10 and MLC transmission) of two beam‐matched machines, the Varian Clinac iX and Varian 600 C/D Unique, are compared and evaluated for differences. In the second stage, 40 IMRT and RapidArc patient plans from the pool of head and neck (H&N) and pelvis sites are selected for the study. The plans are swapped across the machines for dose recalculation and the DVHs of target and critical organs are evaluated for dose differences. Following this, the accuracy of beam‐matching at the TPS level for treatments like IMRT and RapidArc is compared. On comparing PDD, profiles (central 80%) and output factors between the two machines, maximum percentage disagreements of −2.39%, −2.0% and −2.78%, respectively, were observed. The maximum dose difference observed at volumes in IMRT and RapidArc treatments for the H&N dose prescription of 69.3 Gy/33 fractions is 0.88 Gy and 0.82 Gy, respectively. Similarly, for pelvis, with a dose prescription of 50 Gy/25 fractions, a maximum dose difference of 0.55 Gy and 0.53 Gy is observed at volumes in IMRT and RapidArc treatments, respectively. Overall, the results of the swapped plans between the two machines' 6 MV X‐rays are well within the limits of accepted clinical tolerance. PACS number(s): 87.56.bd PMID:27685106

  6. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  7. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
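The patented scheme performs 1D FFTs along one dimension, an all-to-all redistribution across nodes, then 1D FFTs along the other dimension. On a single node the redistribution reduces to a matrix transpose, which the sketch below makes explicit (a naive O(n²) DFT stands in for the FFT, and the patent's randomized send order is not modeled):

```python
import cmath

def dft(vec):
    """Naive 1D discrete Fourier transform (stand-in for an FFT)."""
    n = len(vec)
    return [sum(vec[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def fft2d_via_transpose(matrix):
    """2D transform as: row transforms, transpose, row transforms.

    The transpose plays the role of the "all-to-all" redistribution:
    each node exchanges its partial results with every other node so
    the second dimension becomes local.
    """
    rows = [dft(row) for row in matrix]
    transposed = [list(col) for col in zip(*rows)]      # "all-to-all"
    cols = [dft(row) for row in transposed]
    return [list(col) for col in zip(*cols)]            # restore layout

result = fft2d_via_transpose([[1, 2], [3, 4]])
# result[0][0] is the DC term: 1 + 2 + 3 + 4 = 10
```

Randomizing the order of the all-to-all messages, as the patent claims, spreads traffic across network links instead of creating synchronized hot spots.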

  8. LiF thermoluminescence dosimetry for mapping absorbed dose distributions in the gamma ray disinfection of machine-baled sheep wool

    International Nuclear Information System (INIS)

    Dexi Jiang

    1985-01-01

The measurement of absorbed dose distributions of ⁶⁰Co γ-rays in machine-baled sheep wool, which is disinfected of certain parasitic bacteria (e.g. Brucella bacilli) by γ-ray treatment, is summarized. The preparation and main physical properties of the LiF-TLD are described, as well as the shape, structure and the activity of the ⁶⁰Co source and typical dose distributions measured around the source in free air. The results of dose distributions measured by the LiF-TLD agreed within ±5% with those given by a calibrated ionization chamber. The exposure rates (units R/min) at three typical measurement points inside a bale of sheep's wool were found to be quite uniform: centre 3.8×10³ (±2.1%); upper region 3.9×10³ (±2.4%); lower region 3.9×10³ (±1.9%). (author)

  9. Implementation of the Lanczos algorithm for the Hubbard model on the Connection Machine system

    International Nuclear Information System (INIS)

    Leung, P.W.; Oppenheimer, P.E.

    1992-01-01

An implementation of the Lanczos algorithm for the exact diagonalization of the two dimensional Hubbard model on a 4x4 square lattice on the Connection Machine CM-2 system is described. The CM-2 is a massively parallel machine with distributed memory. The program is written in C/PARIS. This implementation minimizes memory usage by generating the matrix elements as needed instead of storing them. The Lanczos vectors are stored across the local memory of the processors. Using translational symmetry only, the dimension of the Hilbert space at half filling is more than 10 million. A speed of about 2.4 min per iteration is achieved on a 64K CM-2. This implementation is scalable: running it on a bigger machine with more processors speeds up the process. The performance analysis of this implementation is presented and its advantages and disadvantages are discussed.
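The record's key memory-saving idea, generating matrix elements on demand instead of storing them, corresponds to passing a matrix-free `matvec` operator to the Lanczos iteration. A plain single-node sketch (illustrative Python, not the C/PARIS implementation; no reorthogonalization):

```python
import math
import random

def lanczos(matvec, n, m, seed=0):
    """m-step Lanczos tridiagonalization of a symmetric operator.

    matvec: function v -> A @ v; the matrix A itself is never stored,
    mirroring the generate-as-needed strategy in the abstract.
    Returns the diagonal (alphas) and off-diagonal (betas) of the
    tridiagonal matrix T, whose eigenvalues approximate those of A.
    """
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
    v_prev = [0.0] * n
    beta = 0.0
    alphas, betas = [], []
    for _ in range(m):
        w = matvec(v)
        alpha = sum(wi * vi for wi, vi in zip(w, v))
        w = [wi - alpha * vi - beta * pi
             for wi, vi, pi in zip(w, v, v_prev)]
        beta = math.sqrt(sum(x * x for x in w))
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:          # Krylov space exhausted
            break
        v_prev, v = v, [x / beta for x in w]
    return alphas, betas[:-1]
```

For the Hubbard model, `matvec` applies hopping and interaction terms to a basis-state vector on the fly, so only a few Lanczos vectors ever reside in (distributed) memory.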

  10. A trade-off between local and distributed information processing associated with remote episodic versus semantic memory.

    Science.gov (United States)

    Heisz, Jennifer J; Vakorin, Vasily; Ross, Bernhard; Levine, Brian; McIntosh, Anthony R

    2014-01-01

    Episodic memory and semantic memory produce very different subjective experiences yet rely on overlapping networks of brain regions for processing. Traditional approaches for characterizing functional brain networks emphasize static states of function and thus are blind to the dynamic information processing within and across brain regions. This study used information theoretic measures of entropy to quantify changes in the complexity of the brain's response as measured by magnetoencephalography while participants listened to audio recordings describing past personal episodic and general semantic events. Personal episodic recordings evoked richer subjective mnemonic experiences and more complex brain responses than general semantic recordings. Critically, we observed a trade-off between the relative contribution of local versus distributed entropy, such that personal episodic recordings produced relatively more local entropy whereas general semantic recordings produced relatively more distributed entropy. Changes in the relative contributions of local and distributed entropy to the total complexity of the system provides a potential mechanism that allows the same network of brain regions to represent cognitive information as either specific episodes or more general semantic knowledge.
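The record does not specify the exact entropy estimator applied to the MEG signals; as a generic illustration only of the underlying idea (quantifying response complexity via the entropy of a signal's amplitude distribution), with the binning scheme an assumption:

```python
import math
from collections import Counter

def shannon_entropy(signal, bins=8):
    """Shannon entropy (bits) of a signal's binned amplitude distribution.

    A constant signal has zero entropy; a signal spread evenly over
    many amplitude levels approaches log2(bins).
    """
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0          # guard against a flat signal
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in signal)
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

In the study's terms, "local" entropy would be computed per region and "distributed" entropy across regions; the trade-off is in how total complexity splits between the two.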

  11. Distribution of Peripheral Memory T Follicular Helper Cells in Patients with Schistosomiasis Japonica.

    Directory of Open Access Journals (Sweden)

    Xiaojun Chen

Schistosomiasis is a helminthic disease that affects more than 200 million people. An effective vaccine would be a major step towards eliminating the disease. Studies suggest that T follicular helper (Tfh) cells provide help to B cells to generate long-term humoral immunity, which would be a crucial component of successful vaccines. Thus, understanding the biological characteristics of Tfh cells in patients with schistosomiasis, which has never been explored, is essential for vaccine design. In this study, we investigated the biological characteristics of peripheral memory Tfh cells in schistosomiasis patients by flow cytometry. Our data showed that the frequencies of total and activated peripheral memory Tfh cells in patients were significantly increased during Schistosoma japonicum infection. Moreover, Tfh2 cells, which were reported to be a specific subpopulation that facilitates the generation of protective antibodies, were increased more greatly than other subpopulations of total peripheral memory Tfh cells in patients with schistosomiasis japonica. More importantly, our results showed significant correlations of the percentage of Tfh2 cells with both the frequency of plasma cells and the level of IgG antibody. In addition, our results showed that the percentage of T follicular regulatory (Tfr) cells was also increased in patients with schistosomiasis. Our report is the first characterization of peripheral memory Tfh cells in schistosomiasis patients, which not only provides potential targets to improve the immune response to vaccination, but is also important for the development of vaccination strategies to control schistosomiasis.

  12. Machine Shop Grinding Machines.

    Science.gov (United States)

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  13. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map a history to each processor dynamically and to map the control process to a fixed processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)

  14. Efficient calculation of open quantum system dynamics and time-resolved spectroscopy with distributed memory HEOM (DM-HEOM).

    Science.gov (United States)

    Kramer, Tobias; Noack, Matthias; Reinefeld, Alexander; Rodríguez, Mirta; Zelinskyy, Yaroslav

    2018-06-11

Time- and frequency-resolved optical signals provide insights into the properties of light-harvesting molecular complexes, including excitation energies, dipole strengths and orientations, as well as in the exciton energy flow through the complex. The hierarchical equations of motion (HEOM) provide a unifying theory, which allows one to study the combined effects of system-environment dissipation and non-Markovian memory without making restrictive assumptions about weak or strong couplings or separability of vibrational and electronic degrees of freedom. With increasing system size the exact solution of the open quantum system dynamics requires memory and compute resources beyond a single compute node. To overcome this barrier, we developed a scalable variant of HEOM. Our distributed memory HEOM, DM-HEOM, is a universal tool for open quantum system dynamics. It is used to accurately compute all experimentally accessible time- and frequency-resolved processes in light-harvesting molecular complexes with arbitrary system-environment couplings for a wide range of temperatures and complex sizes. © 2018 Wiley Periodicals, Inc.

  15. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    Science.gov (United States)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

Recently, surveillance and Automatic Target Recognition (ATR) applications are increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g. telescopes, precise optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and show a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture solution with real-time recognition capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers a performance advantage of an order of magnitude over a RAM-based (Random Access Memory) search architecture for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here, and other SOPC solutions/design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
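A CAM inverts the RAM access pattern: instead of mapping address to word, it answers "which addresses hold this word?", comparing the key against every entry in parallel in a single cycle. A software model of that lookup (the hardware parallelism becomes a scan here; the optional mask mimics a ternary CAM's don't-care bits, and the function name is illustrative):

```python
def cam_search(cam_table, pattern, mask=~0):
    """Return all addresses whose stored word matches `pattern`.

    cam_table: list of stored words (ints), indexed by address.
    mask: bits that participate in the comparison; masked-out bits
    behave like a ternary CAM's don't-care positions.
    """
    return [addr for addr, word in enumerate(cam_table)
            if (word & mask) == (pattern & mask)]

table = [0b1010, 0b1100, 0b1010]
exact = cam_search(table, 0b1010)            # full-word match
prefix = cam_search(table, 0b1000, mask=0b1000)  # match top bit only
```

This one-cycle match over all entries is the source of the order-of-magnitude advantage the paper reports over sequential RAM-based search.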

  16. Virtual Vector Machine for Bayesian Online Classification

    OpenAIRE

    Minka, Thomas P.; Xiang, Rongjing; Yuan; Qi

    2012-01-01

In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows you to smoothly trade off prediction accuracy with memory size. The virtual vector machine summarizes the information con...

  17. Paying attention to working memory: Similarities in the spatial distribution of attention in mental and physical space.

    Science.gov (United States)

    Sahan, Muhammet Ikbal; Verguts, Tom; Boehler, Carsten Nicolas; Pourtois, Gilles; Fias, Wim

    2016-08-01

    Selective attention is not limited to information that is physically present in the external world, but can also operate on mental representations in the internal world. However, it is not known whether the mechanisms of attentional selection operate in similar fashions in physical and mental space. We studied the spatial distributions of attention for items in physical and mental space by comparing how successfully distractors were rejected at varying distances from the attended location. The results indicated very similar distribution characteristics of spatial attention in physical and mental space. Specifically, we found that performance monotonically improved with increasing distractor distance relative to the attended location, suggesting that distractor confusability is particularly pronounced for nearby distractors, relative to distractors farther away. The present findings suggest that mental representations preserve their spatial configuration in working memory, and that similar mechanistic principles underlie selective attention in physical and in mental space.

  18. Individual Differences in Components of Reaction Time Distributions and Their Relations to Working Memory and Intelligence

    Science.gov (United States)

    Schmiedek, Florian; Oberauer, Klaus; Wilhelm, Oliver; Suss, Heinz-Martin; Wittmann, Werner W.

    2007-01-01

    The authors bring together approaches from cognitive and individual differences psychology to model characteristics of reaction time distributions beyond measures of central tendency. Ex-Gaussian distributions and a diffusion model approach are used to describe individuals' reaction time data. The authors identified common latent factors for each…
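An ex-Gaussian random variable is the sum of an independent Gaussian (mu, sigma) and exponential (mean tau) component, so its mean is mu + tau and its variance sigma² + tau². A sketch of sampling and a method-of-moments check (parameter values are illustrative, not the study's fitted values):

```python
import random
import statistics

def sample_ex_gaussian(mu, sigma, tau, n, seed=7):
    """Draw n ex-Gaussian reaction times: Gaussian + exponential."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)
            for _ in range(n)]

def moment_estimates(samples):
    """Recover (mean, variance); expect mu + tau and sigma^2 + tau^2."""
    return statistics.fmean(samples), statistics.pvariance(samples)

rts = sample_ex_gaussian(mu=0.4, sigma=0.05, tau=0.15, n=200000)
mean_hat, var_hat = moment_estimates(rts)
```

In practice ex-Gaussian parameters are usually fit by maximum likelihood rather than moments, since tau captures the slow tail of the RT distribution that the authors relate to working memory and intelligence.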

  19. Distributed patterns of occipito-parietal functional connectivity predict the precision of visual working memory.

    Science.gov (United States)

    Galeano Weber, Elena M; Hahn, Tim; Hilger, Kirsten; Fiebach, Christian J

    2017-02-01

    Limitations in visual working memory (WM) quality (i.e., WM precision) may depend on perceptual and attentional limitations during stimulus encoding, thereby affecting WM capacity. WM encoding relies on the interaction between sensory processing systems and fronto-parietal 'control' regions, and differences in the quality of this interaction are a plausible source of individual differences in WM capacity. Accordingly, we hypothesized that the coupling between perceptual and attentional systems affects the quality of WM encoding. We combined fMRI connectivity analysis with behavioral modeling by fitting a variable precision and fixed capacity model to the performance data obtained while participants performed a visual delayed continuous response WM task. We quantified functional connectivity during WM encoding between occipital and parietal brain regions activated during both perception and WM encoding, as determined using a conjunction of two independent experiments. The multivariate pattern of voxel-wise inter-areal functional connectivity significantly predicted WM performance, most specifically the mean of WM precision but not the individual number of items that could be stored in memory. In particular, higher occipito-parietal connectivity was associated with higher behavioral mean precision. These results are consistent with a network perspective of WM capacity, suggesting that the efficiency of information flow between perceptual and attentional neural systems is a critical determinant of limitations in WM quality. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Distributed patterns of activity in sensory cortex reflect the precision of multiple items maintained in visual short-term memory.

    Science.gov (United States)

    Emrich, Stephen M; Riggall, Adam C; Larocque, Joshua J; Postle, Bradley R

    2013-04-10

    Traditionally, load sensitivity of sustained, elevated activity has been taken as an index of storage for a limited number of items in visual short-term memory (VSTM). Recently, studies have demonstrated that the contents of a single item held in VSTM can be decoded from early visual cortex, despite the fact that these areas do not exhibit elevated, sustained activity. It is unknown, however, whether the patterns of neural activity decoded from sensory cortex change as a function of load, as one would expect from a region storing multiple representations. Here, we use multivoxel pattern analysis to examine the neural representations of VSTM in humans across multiple memory loads. In an important extension of previous findings, our results demonstrate that the contents of VSTM can be decoded from areas that exhibit a transient response to visual stimuli, but not from regions that exhibit elevated, sustained load-sensitive delay-period activity. Moreover, the neural information present in these transiently activated areas decreases significantly with increasing load, indicating load sensitivity of the patterns of activity that support VSTM maintenance. Importantly, the decrease in classification performance as a function of load is correlated with within-subject changes in mnemonic resolution. These findings indicate that distributed patterns of neural activity in putatively sensory visual cortex support the representation and precision of information in VSTM.

  1. A QDWH-Based SVD Software Framework on Distributed-Memory Manycore Systems

    KAUST Repository

    Sukkari, Dalal; Ltaief, Hatem; Esposito, Aniello; Keyes, David E.

    2017-01-01

    , the inherent high level of concurrency associated with Level 3 BLAS compute-bound kernels ultimately compensates for the arithmetic complexity overhead. Using the ScaLAPACK two-dimensional block cyclic data distribution with a rectangular processor topology

  2. The integration of elastic wave properties and machine learning for the distribution of petrophysical properties in reservoir modeling

    Science.gov (United States)

    Ratnam, T. C.; Ghosh, D. P.; Negash, B. M.

    2018-05-01

    Conventional reservoir modeling employs variograms to predict the spatial distribution of petrophysical properties. This study aims to improve property distribution by incorporating elastic wave properties. In this study, elastic wave properties obtained from seismic inversion are used as input for an artificial neural network to predict neutron porosity in between well locations. The method employed in this study is supervised learning based on available well logs. This method converts every seismic trace into a pseudo-well log, hence reducing the uncertainty between well locations. By incorporating the seismic response, the reliance on geostatistical methods such as variograms for the distribution of petrophysical properties is reduced drastically. The results of the artificial neural network show good correlation with the neutron porosity log which gives confidence for spatial prediction in areas where well logs are not available.
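
    The general workflow described above can be sketched as follows. This is an illustrative stand-in only: a linear least-squares fit replaces the paper's artificial neural network, and the attribute data are synthetic rather than real seismic inversion output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows = depth samples at wells; columns = elastic attributes
# (e.g. P-impedance, Vp/Vs) taken from seismic inversion.
X_train = rng.normal(size=(200, 2))
true_w = np.array([0.05, -0.02])
y_train = 0.25 + X_train @ true_w            # neutron porosity (fraction)

# Fit: augment with an intercept column and solve least squares.
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def predict_porosity(attributes):
    """Turn one trace's attribute samples into a pseudo-porosity log."""
    a = np.column_stack([np.ones(len(attributes)), attributes])
    return a @ coef

pseudo_log = predict_porosity(X_train[:5])   # porosity between well locations
```

    Applying the learned mapping trace by trace is what turns a seismic volume into pseudo-well logs; swapping the least-squares step for a neural network regressor recovers the paper's approach.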

  3. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview on current sustainable machining. Its chapters cover the concept in economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful on both undergraduate and postgraduate levels and it is of interest to all those working with manufacturing and machining technology.

  4. Study Trapped Charge Distribution in P-Channel Silicon-Oxide-Nitride-Oxide-Silicon Memory Device Using Dynamic Programming Scheme

    Science.gov (United States)

    Li, Fu-Hai; Chiu, Yung-Yueh; Lee, Yen-Hui; Chang, Ru-Wei; Yang, Bo-Jun; Sun, Wein-Town; Lee, Eric; Kuo, Chao-Wei; Shirota, Riichiro

    2013-04-01

In this study, we precisely investigate the charge distribution in the SiN layer by dynamic programming of channel hot hole induced hot electron injection (CHHIHE) in a p-channel silicon-oxide-nitride-oxide-silicon (SONOS) memory device. In the dynamic programming scheme, the gate voltage is increased as a staircase with fixed step amplitude, which prohibits the injection of holes into the SiN layer. Three-dimensional device simulation is calibrated and compared with the measured programming characteristics. It is found, for the first time, that the hot electron injection point quickly traverses from the drain to the source side, synchronized with the expansion of the charged area in the SiN layer. As a result, the injected charges quickly spread almost uniformly over the whole channel area during a short programming period, which affords a large tolerance against lateral trapped-charge diffusion during baking.

  5. Parallelization of MCNP 4, a Monte Carlo neutron and photon transport code system, in highly parallel distributed memory type computer

    International Nuclear Information System (INIS)

    Masukawa, Fumihiro; Takano, Makoto; Naito, Yoshitaka; Yamazaki, Takao; Fujisaki, Masahide; Suzuki, Koichiro; Okuda, Motoi.

    1993-11-01

In order to improve the accuracy and speed of shielding analyses, MCNP 4, a Monte Carlo neutron and photon transport code system, has been parallelized and its efficiency measured on a highly parallel distributed-memory computer, the AP1000. The code was analyzed statically and dynamically, and a suitable parallelization algorithm was determined for the shielding analysis functions of MCNP 4. This includes a strategy in which a new history is assigned dynamically to an idling processor element during execution. Furthermore, to avoid congestion in communication processing, the batch concept, processing multiple histories as a unit, has been introduced. By analyzing a sample cask problem with 2,000,000 histories on the AP1000 with 512 processor elements, a parallelization efficiency of 82% is achieved, and the calculation speed is estimated to be around 50 times that of the FACOM M-780. (author)
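
    The dynamic history assignment and batching idea can be sketched as below. This is a toy illustration only: a thread pool stands in for the AP1000 processor elements and a dummy tally for MCNP transport; the batch size and worker count are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import random

# Toy sketch of the scheduling idea, not MCNP code: histories are grouped
# into batches, and each batch is handed to whichever worker becomes idle
# next, so no processor element sits idle while batches remain. Communicating
# per batch (not per history) limits communication congestion.
N_HISTORIES = 2000
BATCH_SIZE = 100                    # "processing multi-histories by a unit"

def run_batch(seed):
    """Score a dummy tally over one batch of histories."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(BATCH_SIZE))

n_batches = N_HISTORIES // BATCH_SIZE
with ThreadPoolExecutor(max_workers=4) as pool:
    tally = sum(pool.map(run_batch, range(n_batches)))

mean_score = tally / N_HISTORIES    # Monte Carlo estimate of the mean score
```

    Because each worker pulls the next batch as soon as it finishes, faster workers naturally process more batches, which is the load-balancing effect described above.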

  6. Successful declarative memory formation is associated with ongoing activity during encoding in a distributed neocortical network related to working memory: a magnetoencephalography study.

    NARCIS (Netherlands)

    Takashima, A.; Jensen, O.; Oostenveld, R.; Maris, E.G.G.; Coevering, M. van de; Fernandez, G.S.E.

    2006-01-01

    The aim of the present study was to investigate the spatio-temporal characteristics of the neural correlates of declarative memory formation as assessed by the subsequent memory effect, i.e. the difference in encoding activity between subsequently remembered and subsequently forgotten items.

  8. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

Full Text Available Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  9. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.
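
    The decomposition step can be sketched as follows. This is an illustrative single-process fragment (a strip decomposition with a one-pixel halo), not the paper's distributed implementation; in the real code each strip would live on a different computer and the halos would be refreshed over the network.

```python
import numpy as np

def split_with_halo(img, tiles, halo=1):
    """Split a 2D image into `tiles` horizontal strips padded with halo rows."""
    h = img.shape[0]
    edges = np.linspace(0, h, tiles + 1, dtype=int)
    return [img[max(lo - halo, 0):min(hi + halo, h)]
            for lo, hi in zip(edges[:-1], edges[1:])]

image = np.arange(100).reshape(10, 10)
strips = split_with_halo(image, tiles=2)
# Each strip carries one extra boundary row, so local neighborhood
# operations near the cut line see the same pixels as in the global image.
```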

  10. Theta-alpha EEG phase distributions in the frontal area for dissociation of visual and auditory working memory.

    Science.gov (United States)

    Akiyama, Masakazu; Tero, Atsushi; Kawasaki, Masahiro; Nishiura, Yasumasa; Yamaguchi, Yoko

    2017-03-07

Working memory (WM) is known to be associated with synchronization of the theta and alpha bands observed in electroencephalograms (EEGs). Although frontal-posterior global theta synchronization appears in modality-specific WM, local theta synchronization in frontal regions has been found in modality-independent WM. How frontal theta oscillations separately synchronize with task-relevant sensory brain areas remains an open question. Here, we focused on theta-alpha phase relationships in frontal areas using EEG, and then verified their functional roles with mathematical models. EEG data showed that the relationship between theta (6 Hz) and alpha (12 Hz) phases in the frontal areas was about 1:2 during both auditory and visual WM, and that the phase distributions between auditory and visual WM were different. Next, we used the differences in phase distributions to construct FitzHugh-Nagumo type mathematical models. The results replicated the modality-specific branching by orthogonality of the trigonometric functions for theta and alpha oscillations. Furthermore, mathematical and experimental results were consistent with regard to the phase relationships and amplitudes observed in frontal and sensory areas. These results indicate the important role that different phase distributions of theta and alpha oscillations have in modality-specific dissociation in the brain.
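
    A 1:2 theta-alpha phase relationship of this kind is commonly quantified with an n:m phase-locking value. The sketch below is illustrative, using synthetic, analytically defined phases; on real EEG the phases would come from band-pass filtering plus a Hilbert transform.

```python
import numpy as np

fs, dur = 250, 2.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
theta_phase = 2 * np.pi * 6 * t          # 6 Hz theta phase
alpha_phase = 2 * np.pi * 12 * t + 0.3   # 12 Hz alpha, fixed phase offset

def nm_plv(phi1, phi2, n, m):
    """n:m phase-locking value: 1 = perfectly locked, near 0 = unlocked."""
    return abs(np.mean(np.exp(1j * (n * phi1 - m * phi2))))

locked = nm_plv(theta_phase, alpha_phase, n=2, m=1)     # 2:1 theta:alpha
unlocked = nm_plv(theta_phase, alpha_phase, n=1, m=1)   # wrong ratio
```

    With the correct 2:1 ratio the phase difference is constant and the locking value is 1; with the wrong ratio it rotates and averages toward zero.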

  11. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144

  12. Kemari: A Portable High Performance Fortran System for Distributed Memory Parallel Processors

    Directory of Open Access Journals (Sweden)

    T. Kamachi

    1997-01-01

    Full Text Available We have developed a compilation system which extends High Performance Fortran (HPF in various aspects. We support the parallelization of well-structured problems with loop distribution and alignment directives similar to HPF's data distribution directives. Such directives give both additional control to the user and simplify the compilation process. For the support of unstructured problems, we provide directives for dynamic data distribution through user-defined mappings. The compiler also allows integration of message-passing interface (MPI primitives. The system is part of a complete programming environment which also comprises a parallel debugger and a performance monitor and analyzer. After an overview of the compiler, we describe the language extensions and related compilation mechanisms in detail. Performance measurements demonstrate the compiler's applicability to a variety of application classes.
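
    The data-distribution directives that such compilers translate ultimately reduce to ownership arithmetic. Below is a minimal sketch of one common mapping, a 1D block-cyclic distribution as in HPF's DISTRIBUTE (CYCLIC(b)); the function names are illustrative.

```python
# Standard 1D block-cyclic mapping: blocks of `block` consecutive elements
# are dealt round-robin to `nprocs` processors.
def owner(i, block, nprocs):
    """Processor that owns global index i under a block-cyclic distribution."""
    return (i // block) % nprocs

def local_index(i, block, nprocs):
    """Position of global index i within its owner's local array."""
    return (i // (block * nprocs)) * block + i % block

layout = [owner(i, block=2, nprocs=3) for i in range(12)]
# blocks of 2 dealt round-robin to 3 processors:
# [0, 0, 1, 1, 2, 2, 0, 0, 1, 1, 2, 2]
```

    The compiler emits exactly this kind of arithmetic (plus the inverse global-index computation) when lowering distributed-array accesses to local accesses and messages.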

  13. Humans and machines in space: The vision, the challenge, the payoff; Proceedings of the 29th Goddard Memorial Symposium, Washington, Mar. 14, 15, 1991

    Science.gov (United States)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    The present conference discusses the currently envisioned goals of human-machine systems in spacecraft environments, prospects for human exploration of the solar system, and plausible methods for meeting human needs in space. Also discussed are the problems of human-machine interaction in long-duration space flights, remote medical systems for space exploration, the use of virtual reality for planetary exploration, the alliance between U.S. Antarctic and space programs, and the economic and educational impacts of the U.S. space program.

  14. Approach to Accelerating Dissolved Vector Buffer Generation in Distributed In-Memory Cluster Architecture

    Directory of Open Access Journals (Sweden)

    Jinxin Shen

    2018-01-01

    Full Text Available The buffer generation algorithm is a fundamental function in GIS, identifying areas of a given distance surrounding geographic features. Past research largely focused on buffer generation algorithms generated in a stand-alone environment. Moreover, dissolved buffer generation is data- and computing-intensive. In this scenario, the improvement in the stand-alone environment is limited when considering large-scale mass vector data. Nevertheless, recent parallel dissolved vector buffer algorithms suffer from scalability problems, leaving room for further optimization. At present, the prevailing in-memory cluster-computing framework—Spark—provides promising efficiency for computing-intensive analysis; however, it has seldom been researched for buffer analysis. On this basis, we propose a cluster-computing-oriented parallel dissolved vector buffer generating algorithm, called the HPBM, that contains a Hilbert-space-filling-curve-based data partition method, a data skew and cross-boundary objects processing strategy, and a depth-given tree-like merging method. Experiments are conducted in both stand-alone and cluster environments using real-world vector data that include points and roads. Compared with some existing parallel buffer algorithms, as well as various popular GIS software, the HPBM achieves a performance gain of more than 50%.
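
    The Hilbert-space-filling-curve partitioning idea can be sketched as follows. This uses the standard xy-to-index conversion and a naive equal split; it is not the HPBM's skew-aware strategy, and the sample points are invented for illustration.

```python
# Standard xy -> Hilbert-index conversion (grid side n must be a power of 2).
def xy2d(n, x, y):
    """Distance along the Hilbert curve filling an n x n grid."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sort features by Hilbert index, then cut into contiguous chunks: points
# that are close in space tend to land in the same partition.
points = [(0, 0), (3, 3), (1, 0), (2, 3), (0, 1), (3, 2)]
ranked = sorted(points, key=lambda p: xy2d(4, *p))
partitions = ranked[:3], ranked[3:]
```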

  15. [Changes in cortical power distribution produced by memory consolidation as a function of a typewriting skill].

    Science.gov (United States)

    Cunha, Marlo; Bastos, Victor Hugo; Veiga, Heloisa; Cagy, Maurício; McDowell, Kaleb; Furtado, Vernon; Piedade, Roberto; Ribeiro, Pedro

    2004-09-01

    The present study aimed to investigate alterations in EEG patterns in normal, right-handed individuals, during the process of learning a specific motor skill (typewriting). Recent studies have shown that the cerebral cortex is susceptible to several changes during a learning process and that alterations in the brain's electrical patterns take place as a result of the acquisition of a motor skill and memory consolidation. In this context, subjects' brain electrical activity was analyzed before and after the motor task. EEG data were collected by a Braintech 3000 and analyzed by Neurometrics. For the statistical analysis, the behavioral variables "time" and "number of errors" were assessed by a one-way ANOVA. For the neurophysiological variable "Absolute Power", a paired t-Test was performed for each pair of electrodes CZ-C3/CZ-C4, in the theta and alpha frequency bands. The main results demonstrated a change in performance, through both behavioral variables ("time" and "number of errors"). At the same time, no changes were observed for the neurophysiological variable ("Absolute Power") in the theta band. On the other hand, a significant increase was observed in the alpha band in central areas (CZ-C3/CZ-C4). These results suggest an adaptation of the sensory-motor cortex, as a consequence of the typewriting training.

  16. Stochastic fluctuations and distributed control of gene expression impact cellular memory.

    Directory of Open Access Journals (Sweden)

    Guillaume Corre

Full Text Available Despite the stochastic noise that characterizes all cellular processes, cells are able to maintain, and transmit to their daughter cells, a stable level of gene expression. In order to better understand this phenomenon, we investigated the temporal dynamics of gene expression variation using a double reporter gene model. We compared cell clones with transgenes coding for highly stable mRNA and fluorescent proteins with clones expressing destabilized mRNAs and proteins. Both types of clones displayed strong heterogeneity of reporter gene expression levels. However, cells expressing stable gene products produced daughter cells with similar levels of reporter proteins, while in cell clones with short mRNA and protein half-lives the epigenetic memory of the gene expression level was completely suppressed. Computer simulations also confirmed the role of mRNA and protein stability in the conservation of constant gene expression levels over several cell generations. These data indicate that the conservation of a stable phenotype in a cellular lineage may largely depend on the slow turnover of mRNAs and proteins.

17. Object-Oriented Support for Adaptive Methods on Parallel Machines

    Directory of Open Access Journals (Sweden)

    Sandeep Bhatt

    1993-01-01

Full Text Available This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load-balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application independent), and the low overhead of the resulting C++ code (over hand-crafted C code), supports our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution in parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.

  18. Modeling of the wind turbine with doubly fed induction machine and its dynamic behavior in distribution networks

    International Nuclear Information System (INIS)

    Mendez Rodriguez, Christian; Badilla Solorzano, Jorge Adrian

    2014-01-01

Wind turbines equipped with doubly fed induction generators (DFIG) are described. A model is constructed to represent the behavior of wind turbines during connection to distribution networks. The main systems that compose a wind turbine with DFIG are specified in order to develop a mathematical model of each of them. The behavior of the wind turbine in steady-state and transient regimes is investigated to explain its dynamics during nominal operation and contingency situations when connected to distribution networks. In addition, strategies to mitigate the negative effects of such situations and control strategies that contribute to the dynamics of the network are included. An integrated model of the parts of the wind turbine is built in SIMULINK® of MATLAB® to validate the models of the systems and to obtain a tool that allows their simulation. The wind turbine model developed is simulated in order to evaluate and analyze its dynamic behavior under different operating conditions. The results from the validations revealed adequate behavior of the model under normal operating conditions. Regarding behavior in contingency situations, the study is limited to the response to three-phase faults and to voltage and frequency variations under balanced conditions in the power system. [es]

  19. Microscale soil structure development after glacial retreat - using machine-learning based segmentation of elemental distributions obtained by NanoSIMS

    Science.gov (United States)

    Schweizer, Steffen; Schlueter, Steffen; Hoeschen, Carmen; Koegel-Knabner, Ingrid; Mueller, Carsten W.

    2017-04-01

Soil organic matter (SOM) is distributed on mineral surfaces depending on physicochemical soil properties that vary at the submicron scale. Nanoscale secondary ion mass spectrometry (NanoSIMS) can be used to visualize the spatial distribution of up to seven elements simultaneously at a lateral resolution of approximately 100 nm, from which patterns of SOM coatings can be derived. Existing computational methods are mostly confined to visualization and lack spatial quantification measures of coverage and connectivity of organic matter coatings. This study proposes a methodology for the spatial analysis of SOM coatings based on supervised pixel classification and automatic image analysis of the 12C, 12C14N (indicative of SOM) and 16O (indicative of mineral surfaces) secondary ion distributions. The image segmentation of the secondary ion distributions into mineral particle surface and organic coating was done with a machine learning algorithm, which accounts for multiple features like size, color, intensity, edge and texture in all three ion distributions simultaneously. Our workflow allowed the spatial analysis of differences in the SOM coverage during soil development in the Damma glacier forefield (Switzerland) based on NanoSIMS measurements (n=121; containing ca. 4000 particles). The Damma chronosequence comprises several stages of soil development with increasing ice-free period (from ca. 15 to >700 years). To investigate mineral-associated SOM in the developing soil we obtained clay fractions (2.2 g cm⁻³). We found increased coverage and a simultaneous development from patchy-distributed organic coatings to more connected coatings with increasing time after glacial retreat. The normalized N:C ratio (12C14N: (12C14N + 12C)) on the organic matter coatings was higher in the medium-aged soils than in the young and mature ones in both the heavy and light mineral fractions. This reflects the sequential accumulation of proteinaceous SOM in the medium-aged soils and C

  20. Three-dimensional magnetic field computation on a distributed memory parallel processor

    International Nuclear Information System (INIS)

    Barion, M.L.

    1990-01-01

    The analysis of three-dimensional magnetic fields by finite element methods frequently proves too onerous a task for the computing resource on which it is attempted. When non-linear and transient effects are included, it may become impossible to calculate the field distribution to sufficient resolution. One approach to this problem is to exploit the natural parallelism in the finite element method via parallel processing. This paper reports on an implementation of a finite element code for non-linear three-dimensional low-frequency magnetic field calculation on Intel's iPSC/2

  1. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver as all the reading passages and student activities are provided. Presented in s

  2. Applying machine learning to global surface ocean and seabed data to reveal the controls on the distribution of deep-sea sediments

    Science.gov (United States)

    Dutkiewicz, Adriana; Müller, Dietmar; O'Callaghan, Simon

    2017-04-01

    World's ocean basins contain a rich and nearly continuous record of environmental fluctuations preserved as different types of deep-sea sediments. The sediments represent the largest carbon sink on Earth and its largest geological deposit. Knowing the controls on the distribution of these sediments is essential for understanding the history of ocean-climate dynamics, including changes in sea-level and ocean circulation, as well as biological perturbations. Indeed, the bulk of deep-sea sediments comprises the remains of planktonic organisms that originate in the photic zone of the global ocean implying a strong connection between the seafloor and the sea surface. Machine-learning techniques are perfectly suited to unravelling these controls as they are able to handle large sets of spatial data and they often outperform traditional spatial analysis approaches. Using a support vector machine algorithm we recently created the first digital map of seafloor lithologies (Dutkiewicz et al., 2015) based on 14,400 surface samples. This map reveals significant deviations in distribution of deep-sea lithologies from hitherto hand-drawn maps based on far fewer data points. It also allows us to explore quantitatively, for the first time, the relationship between oceanographic parameters at the sea surface and lithologies on the seafloor. We subsequently coupled this global point sample dataset of 14,400 seafloor lithologies to bathymetry and oceanographic grids (sea-surface temperature, salinity, dissolved oxygen and dissolved inorganic nutrients) and applied a probabilistic Gaussian process classifier in an exhaustive combinatorial fashion (Dutkiewicz et al., 2016). We focused on five major lithologies (calcareous sediment, diatom ooze, radiolarian ooze, clay and lithogenous sediment) and used a computationally intensive five-fold cross-validation, withholding 20% of the data at each iteration, to assess the predictive performance of the machine learning method. We find that
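
    The five-fold cross-validation protocol described above (withholding 20% of the data at each iteration) can be sketched as below. The data are synthetic, and a simple nearest-centroid rule stands in for the probabilistic Gaussian process classifier used in the study.

```python
import numpy as np

# Synthetic two-class data standing in for (oceanographic features, lithology).
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-2, 1, size=(50, 2)),
                    rng.normal(+2, 1, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid_accuracy(Xtr, ytr, Xte, yte):
    """Classify test points by the nearest training-class centroid."""
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.mean(dists.argmin(axis=1) == yte))

# Five folds: each iteration withholds 20% of the data for testing.
idx = rng.permutation(len(X))
folds = np.array_split(idx, 5)
scores = []
for k in range(5):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
    scores.append(nearest_centroid_accuracy(
        X[train_idx], y[train_idx], X[test_idx], y[test_idx]))

mean_accuracy = sum(scores) / len(scores)
```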

  3. Face machines

    Energy Technology Data Exchange (ETDEWEB)

    Hindle, D.

    1999-06-01

The article surveys the latest equipment available from the world's manufacturers of a range of tunnelling machines. These are grouped under the headings: excavators; impact hammers; road headers; and shields and tunnel boring machines. Products of thirty manufacturers are referred to. Addresses and fax numbers of companies are supplied. 5 tabs., 13 photos.

  4. Electric machine

    Science.gov (United States)

El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY]; Reddy, Patel Bhageerath [Madison, WI]

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  5. Machine Learning.

    Science.gov (United States)

    Kirrane, Diane E.

    1990-01-01

    As scientists seek to develop machines that can "learn," that is, solve problems by imitating the human brain, a gold mine of information on the processes of human learning is being discovered, expert systems are being improved, and human-machine interactions are being enhanced. (SK)

  6. Nonplanar machines

    International Nuclear Information System (INIS)

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs

  7. What happens when we compare the lifespan distributions of life script events and autobiographical memories of life story events? A cross-cultural study

    DEFF Research Database (Denmark)

    Zaragoza Scherman, Alejandra; Salgado, Sinué; Shao, Zhifang

Cultural Life Script Theory (Berntsen and Rubin, 2004) provides a cultural explanation of the reminiscence bump: adults older than 40 years remember a significantly greater number of life events happening between 15 and 30 years of age (Rubin, Rahal, & Poon, 1998), compared to other lifetime periods. Most of these memories are rated as emotionally positive (Rubin & Berntsen, 2003). The cultural life script represents culturally shared expectations about the order and timing of life events in a typical, idealised life course. By comparing the lifespan distributions of life script events and memories of life story events, we can determine the degree to which the cultural life script serves as a recall template for autobiographical memories, especially of positive life events from adolescence and early adulthood, also known as the reminiscence bump period.

  8. An A.P.L. micro-programmed machine: implementation on a Multi-20 mini-computer, memory organization, micro-programming and flowcharts

    International Nuclear Information System (INIS)

    Granger, Jean-Louis

    1975-01-01

This work deals with the presentation of an APL interpreter implemented on a MULTI 20 mini-computer. It includes a left-to-right syntax analyser and a recursive routine for generation and execution. This routine uses a beating method for array processing. Moreover, during the execution of all APL statements, dynamic memory allocation is used. Execution of basic operations has been micro-programmed. The basic APL interpreter has a length of 10 K bytes. It uses overlay methods. (author) [fr]

  9. Modeling Mental Speed: Decomposing Response Time Distributions in Elementary Cognitive Tasks and Correlations with Working Memory Capacity and Fluid Intelligence

    Directory of Open Access Journals (Sweden)

    Florian Schmitz

    2016-10-01

    Full Text Available Previous research has shown an inverse relation between response times in elementary cognitive tasks and intelligence, but findings are inconsistent as to which is the most informative score. We conducted a study (N = 200 using a battery of elementary cognitive tasks, working memory capacity (WMC paradigms, and a test of fluid intelligence (gf. Frequently used candidate scores and model parameters derived from the response time (RT distribution were tested. Results confirmed a clear correlation of mean RT with WMC and to a lesser degree with gf. Highly comparable correlations were obtained for alternative location measures with or without extreme value treatment. Moderate correlations were found as well for scores of RT variability, but they were not as strong as for mean RT. Additionally, there was a trend towards higher correlations for slow RT bands, as compared to faster RT bands. Clearer evidence was obtained in an ex-Gaussian decomposition of the response times: the exponential component was selectively related to WMC and gf in easy tasks, while mean response time was additionally predictive in the most complex tasks. The diffusion model parsimoniously accounted for these effects in terms of individual differences in drift rate. Finally, correlations of model parameters as trait-like dispositions were investigated across different tasks, by correlating parameters of the diffusion and the ex-Gaussian model with conventional RT and accuracy scores.
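
    The ex-Gaussian decomposition mentioned above models each response time as a Gaussian component (mu, sigma) plus an independent exponential tail (tau). Below is a minimal moment-matching sketch on synthetic RTs, not the fitting routine used in the study.

```python
import random
import statistics

# For an ex-Gaussian RT = Normal(mu, sigma) + Exponential(mean tau):
#   mean = mu + tau
#   variance = sigma^2 + tau^2
#   third central moment = 2 * tau^3
# Matching sample moments to these expressions yields the estimates below.
def exgauss_moments(rts):
    m = statistics.fmean(rts)
    var = statistics.pvariance(rts)
    m3 = statistics.fmean([(x - m) ** 3 for x in rts])
    tau = max(m3 / 2, 0.0) ** (1 / 3)
    sigma = max(var - tau ** 2, 0.0) ** 0.5
    return m - tau, sigma, tau               # (mu, sigma, tau)

rng = random.Random(42)
rts = [rng.gauss(400, 40) + rng.expovariate(1 / 150) for _ in range(100_000)]
mu, sigma, tau = exgauss_moments(rts)        # recovers roughly (400, 40, 150)
```

    Moment matching is the quickest route; maximum-likelihood fitting is more common in practice because the third-moment estimate is noisy in small samples.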

  10. Some methods of encoding simple visual images for use with a sparse distributed memory, with applications to character recognition

    Science.gov (United States)

    Jaeckel, Louis A.

    1989-01-01

    To study the problems of encoding visual images for use with a Sparse Distributed Memory (SDM), I consider a specific class of images: those that consist of several pieces, each of which is a line segment or an arc of a circle. This class includes line drawings of characters such as letters of the alphabet. I give a method of representing a segment or an arc by five numbers in a continuous way; that is, similar arcs have similar representations. I also give methods for encoding these numbers as bit strings in an approximately continuous way. The set of possible segments and arcs may be viewed as a five-dimensional manifold M, whose structure is like a Möbius strip. An image, considered to be an unordered set of segments and arcs, is therefore represented by a set of points in M, one for each piece. I then discuss the problem of constructing a preprocessor to find the segments and arcs in these images, although a preprocessor has not been developed. I also describe a possible extension of the representation.
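The abstract's requirement that "similar arcs have similar representations" carries over to the bit-string stage. A thermometer (unary) code is one simple way to encode a number as bits approximately continuously, so that nearby values yield bit strings at small Hamming distance; this is an illustrative scheme, not Jaeckel's actual encoding:

```python
def thermometer_encode(value, lo, hi, n_bits=16):
    # Map value to a level in [0, n_bits]; set that many leading bits.
    # Nearby values then differ in only a few bits (small Hamming distance),
    # giving an approximately continuous bit-string code.
    level = round((value - lo) / (hi - lo) * n_bits)
    level = max(0, min(n_bits, level))
    return [1] * level + [0] * (n_bits - level)

a = thermometer_encode(0.50, 0.0, 1.0)   # 8 ones
b = thermometer_encode(0.55, 0.0, 1.0)   # 9 ones
hamming = sum(x != y for x, y in zip(a, b))
```

Encoding each of the five arc parameters this way and concatenating the results yields one long bit string per piece, the kind of input an SDM address space expects.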

  11. Discrete time analysis of a repairable machine

    OpenAIRE

    Alfa, Attahiru Sule; Castro, I. T.

    2002-01-01

    We consider, in discrete time, a single machine system that operates for a period of time represented by a general distribution. This machine is subject to failures during operations and the occurrence of these failures depends on how many times the machine has previously failed. Some failures are repairable and the repair times may or may not depend on the number of times the machine was previously repaired. Repair times also have a general distribution. The operating times...

  12. The reminiscence bump without memories: The distribution of imagined word-cued and important autobiographical memories in a hypothetical 70-year-old

    DEFF Research Database (Denmark)

    Koppel, Jonathan; Berntsen, Dorthe

    2016-01-01

    The reminiscence bump is the disproportionate number of autobiographical memories dating from adolescence and early adulthood. It has often been ascribed to a consolidation of the mature self in the period covered by the bump. Here we stripped away factors relating to the characteristics of autob...

  13. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need. Using a CERN Virtual Machine, Linux analysis software runs on a MacBook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  14. Mapping the spatial distribution and activity of 226Ra at legacy sites through Machine Learning interpretation of gamma-ray spectrometry data

    International Nuclear Information System (INIS)

    Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike

    2016-01-01

    Radium (226Ra) contamination derived from military, industrial, and pharmaceutical products can be found at a number of historical sites across the world, posing a risk to human health. The analysis of spectral data derived using gamma-ray spectrometry can offer a powerful tool to rapidly estimate and map the activity, depth, and lateral distribution of 226Ra contamination covering an extensive area. Subsequently, reliable risk assessments can be developed for individual sites in a fraction of the timeframe compared to traditional labour-intensive sampling techniques: for example soil coring. However, local heterogeneity of the natural background, statistical counting uncertainty, and non-linear source response are confounding problems associated with gamma-ray spectral analysis. This is particularly challenging when attempting to deal with enhanced concentrations of a naturally occurring radionuclide such as 226Ra. As a result, conventional surveys tend to attribute the highest activities to the largest total signal received by a detector (Gross counts): an assumption that tends to neglect higher activities at depth. To overcome these limitations, a methodology was developed making use of Monte Carlo simulations, Principal Component Analysis and Machine Learning based algorithms to derive depth and activity estimates for 226Ra contamination. The approach was applied on spectra taken using two gamma-ray detectors (Lanthanum Bromide and Sodium Iodide), with the aim of identifying an optimised combination of detector and spectral processing routine. It was confirmed that, through a combination of Neural Networks and Lanthanum Bromide, the most accurate depth and activity estimates could be found. The advantage of the method was demonstrated by mapping depth and activity estimates at a case study site in Scotland. There the method identified significantly higher activity (< 3 Bq g−1) occurring at depth (> 0.4 m) that conventional gross counting algorithms failed to identify.
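The processing chain in the abstract (simulated spectra, Principal Component Analysis, then a learned depth/activity model) can be sketched on synthetic data. The spectral model, the attenuation behaviour, and the linear regressor standing in for the paper's neural network are all assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_spectra, n_channels = 200, 64
depth = rng.uniform(0.0, 1.0, n_spectra)            # synthetic burial depth (m)

# Synthetic spectra: a photopeak that attenuates with depth, plus a
# low-energy scatter continuum that grows with depth, plus counting noise.
template = np.exp(-((np.arange(n_channels) - 30.0) / 6.0) ** 2)
spectra = (np.outer(np.exp(-3.0 * depth), template)
           + np.outer(depth, np.linspace(1.0, 0.0, n_channels))
           + rng.normal(0.0, 0.01, (n_spectra, n_channels)))

# PCA via SVD of the mean-centred spectra; keep the first 3 components.
X = spectra - spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:3].T

# Linear least squares stands in here for the paper's neural-network regressor.
A = np.column_stack([scores, np.ones(n_spectra)])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
pred = A @ coef
```

The point of the PCA step is visible even in this toy version: the few leading components separate the depth-dependent shape changes from channel-level counting noise before any regression is attempted.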

  15. Mapping the spatial distribution and activity of {sup 226}Ra at legacy sites through Machine Learning interpretation of gamma-ray spectrometry data

    Energy Technology Data Exchange (ETDEWEB)

    Varley, Adam, E-mail: a.l.varley@stir.ac.uk [Department of Biological and Environmental Sciences, University of Stirling, Stirling FK9 4LA (United Kingdom); Tyler, Andrew [Department of Biological and Environmental Sciences, University of Stirling, Stirling FK9 4LA (United Kingdom); Smith, Leslie [Department of Computing Science and Mathematics, University of Stirling, Stirling FK9 4LA (United Kingdom); Dale, Paul [Scottish Environmental Protection Agency, Radioactive Substances, Strathallan House, Castle Business Park, Stirling FK9 4TZ (United Kingdom); Davies, Mike [Nuvia Limited, The Library, Eight Street, Harwell Oxford, Didcot, Oxfordshire OX11 0RL (United Kingdom)

    2016-03-01

    Radium ({sup 226}Ra) contamination derived from military, industrial, and pharmaceutical products can be found at a number of historical sites across the world posing a risk to human health. The analysis of spectral data derived using gamma-ray spectrometry can offer a powerful tool to rapidly estimate and map the activity, depth, and lateral distribution of {sup 226}Ra contamination covering an extensive area. Subsequently, reliable risk assessments can be developed for individual sites in a fraction of the timeframe compared to traditional labour-intensive sampling techniques: for example soil coring. However, local heterogeneity of the natural background, statistical counting uncertainty, and non-linear source response are confounding problems associated with gamma-ray spectral analysis. This is particularly challenging, when attempting to deal with enhanced concentrations of a naturally occurring radionuclide such as {sup 226}Ra. As a result, conventional surveys tend to attribute the highest activities to the largest total signal received by a detector (Gross counts): an assumption that tends to neglect higher activities at depth. To overcome these limitations, a methodology was developed making use of Monte Carlo simulations, Principal Component Analysis and Machine Learning based algorithms to derive depth and activity estimates for {sup 226}Ra contamination. The approach was applied on spectra taken using two gamma-ray detectors (Lanthanum Bromide and Sodium Iodide), with the aim of identifying an optimised combination of detector and spectral processing routine. It was confirmed that, through a combination of Neural Networks and Lanthanum Bromide, the most accurate depth and activity estimates could be found. The advantage of the method was demonstrated by mapping depth and activity estimates at a case study site in Scotland. There the method identified significantly higher activity (< 3 Bq g{sup −1}) occurring at depth (> 0.4 m) that conventional gross counting algorithms failed to identify.

  16. Machine translation

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, M

    1982-04-01

    Each language has its own structure. In translating one language into another one, language attributes and grammatical interpretation must be defined in an unambiguous form. In order to parse a sentence, it is necessary to recognize its structure. A so-called context-free grammar can help in this respect for machine translation and machine-aided translation. Problems to be solved in studying machine translation are taken up in the paper, which discusses subjects for semantics and for syntactic analysis and translation software. 14 references.
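A context-free grammar of the kind mentioned above can recognize sentence structure with a standard chart parser. The toy grammar and sentences below are assumptions for illustration only (the CYK algorithm requires the grammar in Chomsky normal form):

```python
# Toy CFG in Chomsky normal form: lexical rules appear as 1-tuples,
# binary rules as 2-tuples. Grammar and sentences are illustrative.
grammar = {
    "S": [("NP", "VP")],
    "VP": [("V", "NP")],
    "NP": [("Det", "N")],
    "Det": [("the",)],
    "N": [("cat",), ("dog",)],
    "V": [("saw",)],
}

def cyk_parse(words):
    """Return the set of nonterminals that derive the whole word sequence."""
    n = len(words)
    # table[i][span] = nonterminals deriving words[i:i+span]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        for lhs, rhss in grammar.items():
            if (w,) in rhss:
                table[i][1].add(lhs)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for lhs, rhss in grammar.items():
                    for rhs in rhss:
                        if (len(rhs) == 2
                                and rhs[0] in table[i][split]
                                and rhs[1] in table[i + split][span - split]):
                            table[i][span].add(lhs)
    return table[0][n]

ok = cyk_parse("the cat saw the dog".split())   # derivable as a sentence "S"
bad = cyk_parse("saw the cat".split())          # a VP, but not a full "S"
```

Recognizing the structure this way is the parsing step the abstract describes; a translation system would then map the resulting tree into the target language's structure.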

  17. Neural markers of negative symptom outcomes in distributed working memory brain activity of antipsychotic-naive schizophrenia patients

    DEFF Research Database (Denmark)

    Nejad, Ayna B.; Madsen, Kristoffer H.; Ebdrup, Bjørn H.

    2013-01-01

    Since working memory deficits in schizophrenia have been linked to negative symptoms, we tested whether features of the one could predict the treatment outcome in the other. Specifically, we hypothesized that working memory-related functional connectivity at pre-treatment can predict improvement...

  18. Machine Learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  19. Machine Translation

    Indian Academy of Sciences (India)

    Research MT System Example: The 'Janus' Translating Phone Project. The Janus ... based on laptops, and simultaneous translation of two speakers in a dialogue. For more ... The current focus in MT research is on using machine learning.

  20. Virtual Machine Language 2.1

    Science.gov (United States)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

    VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. 
Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that

  1. Application of Least-Squares Support Vector Machines for Quantitative Evaluation of Known Contaminant in Water Distribution System Using Online Water Quality Parameters

    Directory of Open Access Journals (Sweden)

    Kexin Wang

    2018-03-01

    Full Text Available In water quality monitoring, early warning and reliable detection of contaminants remain challenging. Many parameters must be measured, and they are not linearly related to pollutant concentrations; in addition, the complex correlations among the water quality parameters impair the accuracy of quantitative detection. To address these problems, least-squares support vector machines (LS-SVM) are applied to evaluate water contamination quantitatively from various conventional water quality sensors. Different contaminants cause different correlated sensor responses, and the degree of response depends on the concentration of the injected contaminant. To enhance the reliability and accuracy of contamination detection, a new method is therefore proposed in which a relative response parameter captures the differences between the water quality parameters and their baselines. A variety of regression models was examined; owing to its high performance, a regression model based on a genetic algorithm (GA) was combined with LS-SVM. For the practical application of the proposed method, controlled experiments were designed, data were collected from the experimental setup, and the measured data were used to estimate the contaminant concentration. The evaluation results validated that the LS-SVM model adapts to the local nonlinear variations between water quality parameters and contaminant concentration with excellent generalization ability and accuracy. The proposed approach was shown to be valid for evaluating potassium ferricyanide at concentrations above 0.5 mg/L in water distribution systems.
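Part of the appeal of LS-SVM is that training reduces to solving a single linear system rather than the quadratic program of standard SVR. A minimal sketch of LS-SVM regression with an RBF kernel on synthetic sensor-style data (the data, kernel width, and regularization constant are illustrative assumptions; the paper's GA-based hyperparameter search is omitted):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Squared distances between all row pairs, then the Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=100.0, gamma=1.0):
    """Train LS-SVM regression by solving its single linear system:
    [[0, 1^T], [1, K + I/C]] @ [b; alpha] = [0; y]."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf_kernel(X, X, gamma) + np.eye(n) / C
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                 # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b

# Synthetic stand-in for sensor data: concentration as a smooth nonlinear
# function of two relative sensor responses.
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, (80, 2))
y = np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2
b, alpha = lssvm_fit(X, y)
y_hat = lssvm_predict(X, b, alpha, X)
```

In the paper's setting, `X` would hold the relative responses of the water quality sensors and `y` the injected contaminant concentration, with `C` and `gamma` tuned by the GA.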

  2. Machine Protection

    International Nuclear Information System (INIS)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  3. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and Probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as an example of a widely used data driven classification/modeling strategy.

  4. Machine Protection

    CERN Document Server

    Zerlauth, Markus; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  5. Machine Protection

    Energy Technology Data Exchange (ETDEWEB)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg [European Organization for Nuclear Research, Geneva (Switzerland)

    2012-07-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  6. Teletherapy machine

    International Nuclear Information System (INIS)

    Panyam, Vinatha S.; Rakshit, Sougata; Kulkarni, M.S.; Pradeepkumar, K.S.

    2017-01-01

    Radiation Standards Section (RSS), RSSD, BARC is the national metrology institute for ionizing radiation. RSS develops and maintains radiation standards for X-ray, beta, gamma and neutron radiations. In radiation dosimetry, traceability, accuracy and consistency of radiation measurements is very important especially in radiotherapy where the success of patient treatment is dependent on the accuracy of the dose delivered to the tumour. Cobalt teletherapy machines have been used in the treatment of cancer since the early 1950s and India had its first cobalt teletherapy machine installed at the Cancer Institute, Chennai in 1956

  7. Splenectomy alters distribution and turnover but not numbers or protective capacity of de novo generated memory CD8 T cells.

    Directory of Open Access Journals (Sweden)

    Marie eKim

    2014-11-01

    Full Text Available The spleen is a highly compartmentalized lymphoid organ that allows for efficient antigen presentation and activation of immune responses. Additionally, the spleen itself functions to remove senescent red blood cells, filter bacteria, and sequester platelets. Splenectomy, commonly performed after blunt force trauma or splenomegaly, has been shown to increase risk of certain bacterial and parasitic infections years after removal of the spleen. Although previous studies report defects in memory B cells and IgM titers in splenectomized patients, the effect of splenectomy on CD8 T cell responses and memory CD8 T cell function remains ill defined. Using TCR-transgenic P14 cells, we demonstrate that homeostatic proliferation and representation of pathogen-specific memory CD8 T cells in the blood are enhanced in splenectomized compared to sham surgery mice. Surprisingly, despite the enhanced turnover, splenectomized mice displayed no changes in total memory CD8 T cell numbers nor impaired protection against lethal dose challenge with Listeria monocytogenes. Thus, our data suggest that memory CD8 T cell maintenance and function remain intact in the absence of the spleen.

  8. Machine testing

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with a laboratory exercise of 3 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercise includes a series of tests carried out by the student on a conventional and a numerically controlled lathe, respectively. This document...

  9. Machine Learning in Medicine.

    Science.gov (United States)

    Deo, Rahul C

    2015-11-17

    Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games - tasks that would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in health care. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades, and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus, part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome. © 2015 American Heart Association, Inc.

  10. Machine Learning in Medicine

    Science.gov (United States)

    Deo, Rahul C.

    2015-01-01

    Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games – tasks which would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in healthcare. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades – and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome. PMID:26572668

  11. Machine rates for selected forest harvesting machines

    Science.gov (United States)

    R.W. Brinker; J. Kinard; Robert Rummer; B. Lanford

    2002-01-01

    Very little new literature has been published on the subject of machine rates and machine cost analysis since 1989 when the Alabama Agricultural Experiment Station Circular 296, Machine Rates for Selected Forest Harvesting Machines, was originally published. Many machines discussed in the original publication have undergone substantial changes in various aspects, not...
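The machine-rate framework referenced above expresses hourly cost as the sum of ownership, operating, and labor components. A minimal calculation in that spirit, with purely illustrative figures (not taken from the circular):

```python
# All figures are illustrative, not from the publication.
purchase, salvage = 250_000.0, 50_000.0      # purchase price and salvage value ($)
life_years, hours_per_year = 5, 2000         # economic life and annual scheduled use
interest, insurance_tax = 0.08, 0.05         # annual rates on average investment

# Ownership cost per scheduled machine hour: straight-line depreciation
# plus interest, insurance, and taxes on the average yearly investment.
depreciation = (purchase - salvage) / (life_years * hours_per_year)
avg_investment = (purchase + salvage) / 2.0
ownership = depreciation + avg_investment * (interest + insurance_tax) / hours_per_year

# Operating and labor costs per hour (fuel, lubricants, repairs; wages).
fuel, lube, repairs, labor = 18.0, 2.7, 12.5, 22.0
machine_rate = ownership + fuel + lube + repairs + labor   # $ per scheduled hour
```

With these numbers the rate comes to 29.75 + 55.20 = 84.95 $/h; the point of the framework is that changing any one assumption (utilization, fuel price, economic life) propagates transparently into the hourly rate.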

  12. Vibration of machine

    International Nuclear Information System (INIS)

    Kwak, Mun Gyu; Na, Sung Su; Baek, Gwang Hyeon; Song, Chul Gi; Han, Sang Bo

    2001-09-01

    This book deals with machine vibration. It covers free vibration of SDOF systems, forced vibration of SDOF systems, vibration of multi-degree-of-freedom systems (introduction and normal forms), distributed systems (introduction, free vibration of bars, and practice problems), approximate solutions (lumped approximations and Rayleigh's quotient), engineering by intuition and experience, and real problems and experimental methods, including signal processing, Fourier transform analysis, frequency analysis, and sensors and actuators.
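Among the approximate methods listed, Rayleigh's quotient estimates the fundamental natural frequency from a trial mode shape: for any shape phi, (phi.K.phi)/(phi.M.phi) is an upper bound on the lowest eigenvalue omega_1^2. A small sketch for a two-degree-of-freedom spring-mass chain with illustrative values:

```python
import numpy as np

# Two-DOF spring-mass chain (illustrative values): M x'' + K x = 0.
M = np.diag([2.0, 1.0])                            # masses (kg)
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])                    # stiffness matrix (N/m)

# Rayleigh's quotient for a trial mode shape phi.
phi = np.array([1.0, 2.0])
w2_estimate = (phi @ K @ phi) / (phi @ M @ phi)    # upper bound on omega_1^2
w_estimate = np.sqrt(w2_estimate)                  # rad/s

# Exact fundamental frequency from the generalized eigenproblem.
w2_exact = np.min(np.real(np.linalg.eigvals(np.linalg.inv(M) @ K)))
```

Here the trial shape happens to be the true first mode, so the bound is tight (both give omega_1^2 = 50); a rougher trial shape would overestimate the fundamental frequency, which is exactly why the quotient is useful as a quick upper-bound check.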

  13. A model for removing the increased recall of recent events from the temporal distribution of autobiographical memory

    NARCIS (Netherlands)

    Janssen, S.M.J.; Gralak, A.; Murre, J.M.J.

    2011-01-01

    The reminiscence bump is the tendency to recall relatively many personal events from the period in which the individual was between 10 and 30 years old. This effect has only been found in autobiographical memory studies that used participants who were older than 40 years of age. The increased recall

  14. Electric machines

    CERN Document Server

    Gross, Charles A

    2006-01-01

    BASIC ELECTROMAGNETIC CONCEPTS: Basic Magnetic Concepts; Magnetically Linear Systems: Magnetic Circuits; Voltage, Current, and Magnetic Field Interactions; Magnetic Properties of Materials; Nonlinear Magnetic Circuit Analysis; Permanent Magnets; Superconducting Magnets; The Fundamental Translational EM Machine; The Fundamental Rotational EM Machine; Multiwinding EM Systems; Leakage Flux; The Concept of Ratings in EM Systems; Summary; Problems. TRANSFORMERS: The Ideal n-Winding Transformer; Transformer Ratings and Per-Unit Scaling; The Nonideal Three-Winding Transformer; The Nonideal Two-Winding Transformer; Transformer Efficiency and Voltage Regulation; Practical Considerations; The Autotransformer; Operation of Transformers in Three-Phase Environments; Sequence Circuit Models for Three-Phase Transformer Analysis; Harmonics in Transformers; Summary; Problems. BASIC MECHANICAL CONSIDERATIONS: Some General Perspectives; Efficiency; Load Torque-Speed Characteristics; Mass Polar Moment of Inertia; Gearing; Operating Modes; Translational Systems; A Comprehensive Example: The Elevator; P...

  15. Charging machine

    International Nuclear Information System (INIS)

    Medlin, J.B.

    1976-01-01

    A charging machine for loading fuel slugs into the process tubes of a nuclear reactor includes a tubular housing connected to the process tube, a charging trough connected to the other end of the tubular housing, a device for loading the charging trough with a group of fuel slugs, means for equalizing the coolant pressure in the charging trough with the pressure in the process tubes, means for pushing the group of fuel slugs into the process tube and a latch and a seal engaging the last object in the group of fuel slugs to prevent the fuel slugs from being ejected from the process tube when the pusher is removed and to prevent pressure liquid from entering the charging machine. 3 claims, 11 drawing figures

  16. Genesis machines

    CERN Document Server

    Amos, Martyn

    2014-01-01

    Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities. Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers - but about life itself.

  17. A Hybrid Approach to Processing Big Data Graphs on Memory-Restricted Systems

    KAUST Repository

    Harshvardhan,

    2015-05-01

    With the advent of big data, processing large graphs quickly has become increasingly important. Most existing approaches either utilize in-memory processing techniques that can only process graphs that fit completely in RAM, or disk-based techniques that sacrifice performance. In this work, we propose a novel RAM-disk hybrid approach to graph processing that can scale well from a single shared-memory node to large distributed-memory systems. It works by partitioning the graph into subgraphs that fit in RAM and uses a paging-like technique to load subgraphs. We show that without modifying the algorithms, this approach can scale from small memory-constrained systems (such as tablets) to large-scale distributed machines with 16,000+ cores.
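The partition-and-page strategy in the abstract can be illustrated with a toy breadth-first search in which edges are grouped into fixed-size vertex partitions and only one partition's edge list is treated as memory-resident at a time. This is a sketch of the general idea, not the paper's actual system:

```python
from collections import defaultdict

def partition_edges(edges, block):
    """Group directed edges into partitions of `block` consecutive source IDs."""
    parts = defaultdict(list)
    for u, v in edges:
        parts[u // block].append((u, v))
    return parts

def paged_bfs(parts, src, block):
    """BFS that 'pages in' one partition's edge list at a time."""
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt, by_part = [], defaultdict(list)
        for u in frontier:                   # group the frontier by partition
            by_part[u // block].append(u)
        for p, nodes in by_part.items():
            resident = parts[p]              # only this subgraph is "in RAM"
            wanted = set(nodes)
            for u, v in resident:
                if u in wanted and v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 5), (5, 4)]
dist = paged_bfs(partition_edges(edges, block=2), src=0, block=2)
```

Grouping the frontier by partition is the key move: each "page-in" of a subgraph serves every frontier vertex in that partition at once, which is what keeps the disk traffic manageable.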

  18. Soft-Deep Boltzmann Machines

    OpenAIRE

    Kiwaki, Taichi

    2015-01-01

    We present a layered Boltzmann machine (BM) that can better exploit the advantages of a distributed representation. It is widely believed that deep BMs (DBMs) have far greater representational power than their shallow counterpart, restricted Boltzmann machines (RBMs). However, this expectation of the supremacy of DBMs over RBMs has never been validated in a theoretical fashion. In this paper, we provide both theoretical and empirical evidence that the representational power of DBMs can be a...

  19. Paging memory from random access memory to backing storage in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  20. Simulating Pre-Asymptotic, Non-Fickian Transport Although Doing Simple Random Walks - Supported By Empirical Pore-Scale Velocity Distributions and Memory Effects

    Science.gov (United States)

    Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.

    2016-12-01

    Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. For the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are skewed velocity distribution, statistical dependencies between subsequent increments of particle positions (memory) and dependence between the x, y and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time-series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We will show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation and we analyze the approach to asymptotic behavior.
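The memory mechanism described above (a random walk on CDF values in [0, 1], with each velocity drawn as a quantile of an empirical pore-velocity distribution) can be sketched as follows. The velocity sample, step size, and particle counts are illustrative assumptions; the virtual diffusion coefficient of the abstract corresponds here to `sigma_u`:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Empirical" pore-velocity sample; a skewed lognormal stands in for
# velocities that would come from pore-scale measurements or simulation.
v_sample = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=10_000))

def velocity_quantile(u):
    """Inverse empirical CDF: map u in [0, 1] to a sampled velocity."""
    idx = np.clip((u * (len(v_sample) - 1)).astype(int), 0, len(v_sample) - 1)
    return v_sample[idx]

n_particles, n_steps, dt, sigma_u = 2000, 50, 1.0, 0.05
u = rng.uniform(0.0, 1.0, n_particles)   # each particle's position on the CDF axis
x = np.zeros(n_particles)                # longitudinal particle positions
for _ in range(n_steps):
    x += velocity_quantile(u) * dt       # advect with the current quantile velocity
    u += rng.normal(0.0, sigma_u, n_particles)  # small CDF step => strong memory
    u = np.abs(u)                        # reflect at u = 0
    u = 1.0 - np.abs(1.0 - u)            # reflect at u = 1
```

Because `u` changes only slightly per step, a particle that starts on a slow quantile stays slow for many steps, which is precisely the velocity memory that produces the heavy, non-Fickian tails in the plume.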

  1. Representational Machines

    DEFF Research Database (Denmark)

    Photography not only represents space. Space is produced photographically. Since its inception in the 19th century, photography has brought to light a vast array of represented subjects. Always situated in some spatial order, photographic representations have been operatively underpinned by social … to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological … possibilities, and genre distinctions. Presenting several distinct ways of producing space photographically, this book opens a new and important field of inquiry for photography research.

  2. Shear machines

    International Nuclear Information System (INIS)

    Astill, M.; Sunderland, A.; Waine, M.G.

    1980-01-01

    A shear machine for irradiated nuclear fuel elements has a replaceable shear assembly comprising a fuel element support block, a shear blade support and a clamp assembly which hold the fuel element to be sheared in contact with the support block. A first clamp member contacts the fuel element remote from the shear blade and a second clamp member contacts the fuel element adjacent the shear blade and is advanced towards the support block during shearing to compensate for any compression of the fuel element caused by the shear blade (U.K.)

  3. Distribution

    Science.gov (United States)

    John R. Jones

    1985-01-01

    Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....

  4. Electricity of machine tool

    International Nuclear Information System (INIS)

    Gijeon media editorial department

    1977-10-01

    This book is divided into three parts. The first part deals with electrical machines, ranging from generators to motors: the motor as a power source of the machine tool, and the electrical equipment of machine tools such as switches in the main circuit, automatic machines, knife switches and push buttons, snap switches, protection devices, timers, solenoids, and rectifiers. The second part handles wiring diagrams, covering the basic electrical circuits of machine tools and the wiring diagrams of machines such as milling, planing and grinding machines. The third part introduces fault diagnosis of machines, giving practical solutions for each diagnosis and the diagnostic method using voltage and resistance measurements with a tester.

  5. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  6. Machine Protection

    CERN Document Server

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an ...

  7. A microcomputer network for the control of digitising machines

    International Nuclear Information System (INIS)

    Seller, P.

    1981-01-01

    A distributed microcomputing network operates in the Bubble Chamber Research Group Scanning Laboratory at the Rutherford and Appleton Laboratories. A microcomputer at each digitising table buffers information, controls the functioning of the table and enhances the machine/operator interface. The system consists of fourteen microcomputers together with a VAX 11/780 computer used for data analysis. These are inter-connected via a packet switched network. This paper will describe the features of the combined system, including the distributed computing architecture and the packet switched method of communication. This paper will also describe in detail a high speed packet switching controller used as a central node of the network. This controller is a multiprocessor microcomputer system with eighteen central processor units, thirty-four direct memory access channels and thirty-four prioritised and vectored interrupt channels. This microcomputer is of general interest as a communications controller due to its totally programmable nature. (orig.)

  8. Operating System For Numerically Controlled Milling Machine

    Science.gov (United States)

    Ray, R. B.

    1992-01-01

    OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
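
    The "equation plotter" idea above, converting an equation that defines a movement into a cutter-control program, can be sketched generically. OPMILL's actual command format is not described in the record, so plain G-code G1 moves are used here as a hypothetical stand-in:

```python
import math

def equation_to_gcode(f, x0, x1, steps, feed=120.0):
    """Sample y = f(x) and emit straight-line milling moves.

    A generic sketch of the 'equation plotter' idea; the output format
    (standard G-code) and the feed-rate parameter are assumptions, not
    details of OPMILL itself.
    """
    lines = ["G21", "G90"]  # metric units, absolute coordinates
    for i in range(steps + 1):
        x = x0 + (x1 - x0) * i / steps
        lines.append(f"G1 X{x:.3f} Y{f(x):.3f} F{feed:.0f}")
    return lines

# a sine-wave contour sampled into 51 cutter positions
path = equation_to_gcode(lambda x: 10 * math.sin(x / 5), 0.0, 25.0, 50)
```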

  9. In memory of Alois Apfelbeck: An Interconnection between Cayley-Eisenstein-Pólya and Landau Probability Distributions

    Directory of Open Access Journals (Sweden)

    Vladimír Vojta

    2013-01-01

    Full Text Available The interconnection between the Cayley-Eisenstein-Pólya distribution and the Landau distribution is studied, and possibly new transform pairs for the Laplace and Mellin transform and integral expressions for the Lambert W function have been found.

  10. Determination of the Lowest-Energy States for the Model Distribution of Trained Restricted Boltzmann Machines Using a 1000 Qubit D-Wave 2X Quantum Computer.

    Science.gov (United States)

    Koshka, Yaroslav; Perera, Dilina; Hall, Spencer; Novotny, M A

    2017-07-01

    The possibility of using a quantum computer D-Wave 2X with more than 1000 qubits to determine the global minimum of the energy landscape of trained restricted Boltzmann machines (RBMs) is investigated. In order to overcome the problem of limited interconnectivity in the D-Wave architecture, the proposed RBM embedding combines multiple qubits to represent a particular RBM unit. The results for the lowest-energy (ground) state and some of the higher-energy states found by the D-Wave 2X were compared with those of the classical simulated annealing (SA) algorithm. In many cases, the D-Wave machine successfully found the same RBM lowest-energy state as that found by SA. In some examples, the D-Wave machine returned a state corresponding to one of the higher-energy local minima found by SA. The inherently imperfect embedding of the RBM into the Chimera lattice explored in this work (i.e., the multiple qubits combined into a single RBM unit were not guaranteed to all be aligned) and the existence of small, persistent biases in the D-Wave hardware may cause a discrepancy between the D-Wave and the SA results. In some of the investigated cases, introduction of a small bias field into the energy function or optimization of the chain-strength parameter in the D-Wave embedding successfully addressed difficulties of the particular RBM embedding. With further development of the D-Wave hardware, the approach will be suitable for much larger numbers of RBM units.
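
    The quantity being minimized is the standard RBM energy E(v, h) = -a·v - b·h - v·W·h over binary visible and hidden units. A small Python sketch that, assuming random weights as a stand-in for a trained RBM, searches for the ground state with classical simulated annealing (the SA baseline mentioned above) and checks the result against exhaustive enumeration:

```python
import numpy as np

rng = np.random.default_rng(1)
nv, nh = 6, 4                       # visible and hidden unit counts
W = rng.normal(0.0, 1.0, (nv, nh))  # assumption: random weights stand in
a = rng.normal(0.0, 1.0, nv)        # for a trained RBM's parameters
b = rng.normal(0.0, 1.0, nh)

def energy(v, h):
    """Standard RBM energy E(v, h) = -a.v - b.h - v.W.h."""
    return -(a @ v + b @ h + v @ W @ h)

def anneal(n_steps=5000, t0=2.0, t1=0.05):
    """Classical simulated annealing over the joint binary state."""
    s = rng.integers(0, 2, nv + nh).astype(float)
    best, best_e = s.copy(), energy(s[:nv], s[nv:])
    for k in range(n_steps):
        t = t0 * (t1 / t0) ** (k / (n_steps - 1))  # geometric cooling
        cand = s.copy()
        i = rng.integers(nv + nh)
        cand[i] = 1.0 - cand[i]                    # single-bit flip
        d_e = energy(cand[:nv], cand[nv:]) - energy(s[:nv], s[nv:])
        if d_e <= 0 or rng.random() < np.exp(-d_e / t):
            s = cand
            e = energy(s[:nv], s[nv:])
            if e < best_e:
                best, best_e = s.copy(), e
    return best, best_e

def brute_force():
    """Exhaustive ground-state energy (feasible for 10 binary units)."""
    best_e = np.inf
    for n in range(2 ** (nv + nh)):
        bits = np.array([(n >> i) & 1 for i in range(nv + nh)], dtype=float)
        best_e = min(best_e, energy(bits[:nv], bits[nv:]))
    return best_e

state, sa_e = anneal()
bf_e = brute_force()  # SA can at best match the exhaustive minimum
```

    On an instance this small the annealer usually lands on the exhaustive minimum; at D-Wave-relevant problem sizes enumeration is infeasible and only heuristic search remains.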

  11. A Prototyping Environment for Research on Human-Machine Interfaces in Process Control: Use of Microsoft WPF for Microworld and Distributed Control System Development

    Energy Technology Data Exchange (ETDEWEB)

    Roger Lew; Ronald L. Boring; Thomas A. Ulrich

    2014-08-01

    Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs, but the set of tools for developing and designing HMIs is still in its infancy. Here we propose that Microsoft Windows Presentation Foundation (WPF) is well suited for many roles in the research and development of HMIs for process control.

  12. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today's markets. The authors address traditional machining topics, such as: single- and multiple-point cutting processes; grinding; component accuracy and metrology; shear stress in cutting; cutting temperature and analysis; and chatter. They also address non-traditional machining, such as: electrical discharge machining; electrochemical machining; and laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  13. Machine Protection

    International Nuclear Information System (INIS)

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an interlock system providing the glue between these systems. The most recent accelerator, the LHC, will operate with about 3 × 10 14 protons per beam, corresponding to an energy stored in each beam of 360 MJ. This energy can cause massive damage to accelerator equipment in case of uncontrolled beam loss, and a single accident damaging vital parts of the accelerator could interrupt operation for years. This article provides an overview of the requirements for protection of accelerator equipment and introduces the various protection systems. Examples are mainly from LHC, SNS and ESS

  14. Machine terms dictionary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1979-04-15

    This book gives descriptions of machine terms, covering machine design, drawing, machining methods, machine tools, machine materials, automobiles, measurement and control, electricity, basic electronics, information technology, quality assurance, AutoCAD and FA terms, as well as important formulas of mechanical engineering.

  15. Electrical machines with Matlab

    CERN Document Server

    Gonen, Turan

    2011-01-01

    Basic Concepts; Distribution System; Impact of Dispersed Storage and Generation; Brief Overview of Basic Electrical Machines; Real and Reactive Powers in Single-Phase AC Circuits; Three-Phase Circuits; Three-Phase Systems; Unbalanced Three-Phase Loads; Measurement of Average Power in Three-Phase Circuits; Power Factor Correction; Magnetic Circuits; Magnetic Field of Current-Carrying Conductors; Ampère's Magnetic Circuital Law; Magnetic Circuits; Magnetic Circuit with Air Gap; Brief Review of Ferromagnetism; Magnetic Core Losses; How to Determine Flux for a Given MMF; Permanent Magnets; Transformers; Transformer Construction; Brief Rev…

  16. Unconditional polarization qubit quantum memory at room temperature

    Science.gov (United States)

    Namazi, Mehdi; Kupchak, Connor; Jordaan, Bertus; Shahrokhshahi, Reihaneh; Figueroa, Eden

    2016-05-01

    The creation of global quantum key distribution and quantum communication networks requires multiple operational quantum memories. Achieving a considerable reduction in experimental and cost overhead in these implementations is thus a major challenge. Here we present a polarization qubit quantum memory fully operational at 330 K, an unheard-of frontier in the development of useful qubit quantum technology. This result is achieved through an extensive study of how the optical response of a cold atomic medium is transformed by the motion of atoms at room temperature, leading to an optimal characterization of room-temperature quantum light-matter interfaces. Our quantum memory shows an average fidelity of 86.6 ± 0.6% for optical pulses containing on average 1 photon per pulse, thereby defeating any classical strategy exploiting the non-unitary character of the memory efficiency. Our system significantly decreases the technological overhead required to achieve quantum memory operation and will serve as a building block for scalable and technologically simpler many-memory quantum machines. The work was supported by the US Navy Office of Naval Research, Grant Number N00141410801, and the Simons Foundation, Grant Number SBF241180. B. J. acknowledges financial assistance from the National Research Foundation (NRF) of South Africa.

  17. Efficient tuning in supervised machine learning

    NARCIS (Netherlands)

    Koch, Patrick

    2013-01-01

    The tuning of learning algorithm parameters has become more and more important during the last years. With the fast growth of computational power and available memory, databases have grown dramatically. This is very challenging for the tuning of parameters arising in machine learning, since the

  18. The influence of the rs6295 gene polymorphism on serotonin-1A receptor distribution investigated with PET in patients with major depression applying machine learning.

    Science.gov (United States)

    Kautzky, A; James, G M; Philippe, C; Baldinger-Melich, P; Kraus, C; Kranz, G S; Vanicek, T; Gryglewski, G; Wadsak, W; Mitterhauser, M; Rujescu, D; Kasper, S; Lanzenberger, R

    2017-06-13

    Major depressive disorder (MDD) is the most common neuropsychiatric disease and despite extensive research, its genetic substrate is still not sufficiently understood. The common polymorphism rs6295 of the serotonin-1A receptor gene (HTR1A) affects the transcriptional regulation of the 5-HT1A receptor and has been closely linked to MDD. Here, we used positron emission tomography (PET), exploiting advances in data mining and statistics by using machine learning, in 62 healthy subjects and 19 patients with MDD, who were scanned with PET using the radioligand [carbonyl-11C]WAY-100635. All the subjects were genotyped for rs6295 and genotype was grouped as GG vs. C allele carriers. A mixed model was applied in a ROI-based (region of interest) approach. ROI binding potential (BPND) was divided by dorsal raphe BPND as a specific measure to highlight rs6295 effects (BPDiv). The mixed model produced an interaction effect of ROI and genotype in the patients' group but no effects in healthy controls. Differences in BPDiv were demonstrated in seven ROIs: parahippocampus, hippocampus, fusiform gyrus, gyrus rectus, supplementary motor area, inferior frontal occipital gyrus and lingual gyrus. For classification of genotype, 'RandomForest' and Support Vector Machines were used; however, no model with sufficient predictive capability could be computed. Our results are in line with preclinical data, mouse model knockout studies as well as previous clinical analyses, demonstrating the two-pronged effect of the G allele on 5-HT1A BPND for, we believe, the first time. Future endeavors should address epigenetic effects and allosteric heteroreceptor complexes. Replication in larger samples of MDD patients is necessary to substantiate our findings.

  19. A Machine-to-Machine protocol benchmark for eHealth applications - Use case: Respiratory rehabilitation.

    Science.gov (United States)

    Talaminos-Barroso, Alejandro; Estudillo-Valderrama, Miguel A; Roa, Laura M; Reina-Tosina, Javier; Ortega-Ruiz, Francisco

    2016-06-01

    M2M (Machine-to-Machine) communications represent one of the main pillars of the new paradigm of the Internet of Things (IoT), and are making possible new opportunities for the eHealth business. Nevertheless, the large number of M2M protocols currently available hinders the selection of a suitable solution that satisfies the requirements that eHealth applications can demand. In the first place, to develop a tool that provides a benchmarking analysis in order to objectively select among the most relevant M2M protocols for eHealth solutions. In the second place, to validate the tool with a particular use case: respiratory rehabilitation. A software tool, called Distributed Computing Framework (DFC), has been designed and developed to execute the benchmarking tests and facilitate the deployment in environments with a large number of machines, independently of the protocol and performance metrics selected. DDS, MQTT, CoAP, JMS, AMQP and XMPP protocols were evaluated considering different specific performance metrics, including CPU usage, memory usage, bandwidth consumption, latency and jitter. The results obtained allowed us to validate a use case: respiratory rehabilitation of chronic obstructive pulmonary disease (COPD) patients in two scenarios with different types of requirements: Home-Based and Ambulatory. The results of the benchmark comparison can guide eHealth developers in the choice of M2M technologies. In this regard, the framework presented is a simple and powerful tool for the deployment of benchmark tests under specific environments and conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
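
    The latency and jitter metrics used in such a benchmark can be illustrated with a transport-agnostic sketch. The function names and the jitter definition (mean absolute difference of consecutive latencies, in the spirit of RFC 3550) are assumptions for illustration, not details of the DFC framework:

```python
import statistics
import time

def measure(transport, n=200):
    """Time n request/response exchanges through 'transport' (any
    callable performing one round trip) and report mean latency and
    jitter. Jitter here is the mean absolute difference between
    consecutive latencies, one of several defensible definitions."""
    lat = []
    for _ in range(n):
        t0 = time.perf_counter()
        transport()
        lat.append(time.perf_counter() - t0)
    jitter = statistics.fmean(abs(x - y) for x, y in zip(lat, lat[1:]))
    return statistics.fmean(lat), jitter

def loopback():
    """Stand-in for an M2M round trip (e.g. an MQTT publish/ack)."""
    time.sleep(0.001)

mean_lat, jitter = measure(loopback, n=50)
```

    Swapping `loopback` for a real client call (MQTT publish, CoAP GET, and so on) turns the same harness into a per-protocol comparison.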

  20. Addiction Machines

    Directory of Open Access Journals (Sweden)

    James Godley

    2011-10-01

    Full Text Available Entry into the crypt William Burroughs shared with his mother opened and shut around a failed re-enactment of William Tell’s shot through the prop placed upon a loved one’s head. The accidental killing of his wife Joan completed the installation of the addictation machine that spun melancholia as manic dissemination. An early encryptment to which was added the audio portion of abuse deposited an undeliverable message in WB. William could never tell, although his corpus bears the inscription of this impossibility as another form of possibility. James Godley is currently a doctoral candidate in English at SUNY Buffalo, where he studies psychoanalysis, Continental philosophy, and nineteenth-century literature and poetry (British and American). His work on the concept of mourning and “the dead” in Freudian and Lacanian approaches to psychoanalytic thought and in Gothic literature has also spawned an essay on zombie porn. Since entering the Academy of Fine Arts Karlsruhe in 2007, Valentin Hennig has studied in the classes of Silvia Bächli, Claudio Moser, and Corinne Wasmuht. In 2010 he spent a semester at the Dresden Academy of Fine Arts. His work has been shown in group exhibitions in Freiburg and Karlsruhe.

  1. Machine musicianship

    Science.gov (United States)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.

  2. Removing the Restrictions Imposed on Finite State Machines ...

    African Journals Online (AJOL)

    This study determines an effective method of removing the fixed, finite amount of memory that restricts finite state machines from carrying out compilation jobs that require a larger amount of memory. The study is ... The conclusion reviewed the various steps followed and made projections for further reading. Keyword: ...

  3. The Effect of SiC Polytypes on the Heat Distribution Efficiency of a Phase Change Memory.

    Science.gov (United States)

    Aziz, M. S.; Mohammed, Z.; Alip, R. I.

    2018-03-01

    The amorphous-to-crystalline transition of germanium-antimony-tellurium (GST) using three silicon carbide polytypes as the heating element was investigated. Simulation was done using COMSOL Multiphysics 5.0 software with a separate heater structure. Silicon carbide (SiC) has three common polytypes, 3C-SiC, 4H-SiC and 6H-SiC, which have different thermal conductivities. The temperature of the GST and the phase transition of the GST can be obtained from the simulation. The temperatures of the GST when using 3C-SiC, 4H-SiC and 6H-SiC are 467 K, 466 K and 460 K, respectively. The phase transition of GST from the amorphous to the crystalline state for the three SiC polytypes can be determined in this simulation. Based on the results, the thermal conductivity of SiC affects the temperature of the GST and hence the switching of the phase change memory (PCM).

  4. Memory architecture

    NARCIS (Netherlands)

    2012-01-01

    A memory architecture is presented. The memory architecture comprises a first memory and a second memory. The first memory has at least a bank with a first width addressable by a single address. The second memory has a plurality of banks of a second width, said banks being addressable by components

  5. The cognitive approach to conscious machines

    CERN Document Server

    Haikonen, Pentti O

    2003-01-01

    Could a machine have an immaterial mind? The author argues that true conscious machines can be built, but rejects artificial intelligence and classical neural networks in favour of the emulation of the cognitive processes of the brain-the flow of inner speech, inner imagery and emotions. This results in a non-numeric meaning-processing machine with distributed information representation and system reactions. It is argued that this machine would be conscious; it would be aware of its own existence and its mental content and perceive this as immaterial. Novel views on consciousness and the mind-

  6. Commonality and Variability Analysis for Xenon Family of Separation Virtual Machine Monitors (CVAX)

    Science.gov (United States)

    2017-07-18

    ...the sponsor (e.g., military, intelligence community, other government, commercial, medical) and upon the type of system (e.g., application in the...loads.
    • Machine memory. Xen's terminology for hardware memory present on a chip.
    • Misuse case. Abuse case. Attacker-product interaction that the...on connections between domains.
    • Physical memory. Xen's terminology, short for pseudo-physical memory. Physical memory is the Xen term for the

  7. Machine technology: a survey

    International Nuclear Information System (INIS)

    Barbier, M.M.

    1981-01-01

    An attempt was made to find existing machines that have been upgraded and that could be used for large-scale decontamination operations outdoors. Such machines are in the building industry, the mining industry, and the road construction industry. The road construction industry has yielded the machines in this presentation. A review is given of operations that can be done with the machines available

  8. Machine Shop Lathes.

    Science.gov (United States)

    Dunn, James

    This guide, the second in a series of five machine shop curriculum manuals, was designed for use in machine shop courses in Oklahoma. The purpose of the manual is to equip students with basic knowledge and skills that will enable them to enter the machine trade at the machine-operator level. The curriculum is designed so that it can be used in…

  9. Superconducting rotating machines

    International Nuclear Information System (INIS)

    Smith, J.L. Jr.; Kirtley, J.L. Jr.; Thullen, P.

    1975-01-01

    The opportunities and limitations of the applications of superconductors in rotating electric machines are given. The relevant properties of superconductors and the fundamental requirements for rotating electric machines are discussed. The current state-of-the-art of superconducting machines is reviewed. Key problems, future developments and the long range potential of superconducting machines are assessed

  10. Advanced Electrical Machines and Machine-Based Systems for Electric and Hybrid Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Cheng

    2015-09-01

    Full Text Available The paper presents a number of advanced solutions on electric machines and machine-based systems for the powertrain of electric vehicles (EVs). Two types of systems are considered, namely the drive systems designated for EV propulsion and the power split devices utilized in the popular series-parallel hybrid electric vehicle architecture. After reviewing the main requirements for the electric drive systems, the paper illustrates advanced electric machine topologies, including a stator permanent magnet (stator-PM) motor, a hybrid-excitation motor, a flux memory motor and a redundant motor structure. Then, it illustrates advanced electric drive systems, such as the magnetic-geared in-wheel drive and the integrated starter generator (ISG). Finally, three machine-based implementations of the power split devices are expounded, built around the dual-rotor PM machine, the dual-stator PM brushless machine and the magnetic-geared dual-rotor machine. As a conclusion, the development trends in the field of electric machines and machine-based systems for EVs are summarized.

  11. Von Krahli Teatri „Luikede järv“ kui mälumasin: esteetiline absoluut ja sotsiaalne kontekst / Von Krahl Theatre’s “Swan Lake” as a Memory Machine: Aesthetic Absolute and Social Context

    Directory of Open Access Journals (Sweden)

    Riina Oruaas

    2014-12-01

    rather weakly based on the dramatis personae of the ballet „Luikede järv“ („Swan Lake“). The stage action can be described using the metaphors of machinery or game; its relationship to the screen adds the metaphor of a dream world. The movement of the dancers remained mechanical although the choreography was postmodern and more playful than that of a ballet. The crisis of totalitarian order can be seen in the changes taking place in the middle of the performance where the rigid structure of characters became decomposed and the movement became more dynamic. The video projections are examined according to three models of intermediality: hierarchical, inter-relational and hybrid (Lavender 2010). The hierarchical model (one medium dominating the others) appeared in certain situations only. The most frequent model of the relationship between stage action and screens was the inter-relational. This model is structured by the gaps and (opening up of spaces and „fissures“ in the performance, here appearing through parallelisms of the screen images and stage and at given moments of intersection. The least represented model was the hybrid model, i.e. the one where figures of different media were merged; it existed only in a hybrid reality created by computer animation.  On a social level, the performance represented a totalitarian, closed system that lacked any opportunity for a positive exit. Aesthetically, the production can be compared to theatre of the absurd, which is also characterized by a representation of a certain state of being and a feeling of inescapability. The result of juxtaposing post-modern dance and videos was a complex co-existence of several choreographic languages and body techniques. The main relationship between the stage and the screens in the situation of the totalitarian society in this production was dissonance and the production can be summed up as a performative memory machine.

  12. Sensory Dissonance Using Memory Model

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2015-01-01

    Music may occur concurrently or in temporal sequences. Current machine-based methods for the estimation of qualities of the music are unable to take into account the influence of temporal context. A method for calculating dissonance from audio, called sensory dissonance, is improved by the use of a memory model. This approach is validated here by comparing the sensory dissonance obtained using the memory model to data obtained from human subjects.
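
    As a rough illustration of the idea, sensory dissonance of two partials can be computed with a Plomp-Levelt-style curve and then smoothed over time with a leaky integrator standing in for a memory model. The constants follow Sethares' widely used parameterization of the curve, and the leaky integrator is an assumption, not the memory model of the paper:

```python
import math

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Plomp-Levelt-style dissonance of two partials with amplitudes
    a1, a2 (Sethares' parameterization; constants are from the general
    literature, not from the paper being summarized)."""
    fmin, d = min(f1, f2), abs(f2 - f1)
    s = 0.24 / (0.0207 * fmin + 18.96)  # critical-bandwidth scaling
    return a1 * a2 * (math.exp(-3.51 * s * d) - math.exp(-5.75 * s * d))

def temporal_dissonance(frames, alpha=0.5):
    """Leaky-integrator memory: the dissonance of each frame is blended
    with an exponentially decaying trace of the preceding frames, so the
    estimate depends on temporal context (a stand-in memory model)."""
    trace, out = 0.0, []
    for f1, f2 in frames:
        trace = alpha * pair_dissonance(f1, f2) + (1 - alpha) * trace
        out.append(trace)
    return out

# a rough minor second is far more dissonant than a perfect fifth
assert pair_dissonance(440, 466) > pair_dissonance(440, 660)
```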

  13. Compound induction electric rotating machine

    Energy Technology Data Exchange (ETDEWEB)

    Decesare, D

    1987-07-28

    The present invention generally relates to dynamo-electric machines capable of operating in a generator mode or in a motor mode and, more specifically, to increased-efficiency compound-interaction AC and/or DC dynamo-electric machines. This patent describes such a machine having a distributed armature winding in a cylindrical rotor wound to form axial and substantially radial winding portions and including permanent and/or electromagnets to couple magnetic flux into the peripheral or circumferential surface of the rotor, and to provide interaction between a magnetic field formed beyond the rotor axial surfaces and the rotor to thereby enhance the total induction of flux into the rotor for improved, more efficient operation. 28 figs.

  14. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
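
    The metrics listed in this record (local memory bandwidth, integer and floating-point performance) can be approximated with portable micro-benchmarks run on both the physical and the virtual platform. The sketch below is a hypothetical stand-in, not the authors' benchmark suite; the function names and buffer sizes are illustrative only.

```python
import time

def mem_bandwidth_mb_s(size_mb=64):
    """Estimate memory-copy bandwidth by timing one full pass over a buffer."""
    buf = bytearray(size_mb * 1024 * 1024)
    t0 = time.perf_counter()
    copied = bytes(buf)                  # forces a read of buf and a write of the copy
    dt = time.perf_counter() - t0
    assert len(copied) == len(buf)
    return (2 * size_mb) / dt            # read + write traffic, in MB/s

def fp_rate(n=1_000_000):
    """Rough floating-point throughput: n multiply-adds per timed loop."""
    x = 1.0000001
    acc = 0.0
    t0 = time.perf_counter()
    for _ in range(n):
        acc += x * x
    dt = time.perf_counter() - t0
    return n / dt                        # multiply-adds per second

if __name__ == "__main__":
    print(f"copy bandwidth ~ {mem_bandwidth_mb_s():.0f} MB/s")
    print(f"FP throughput  ~ {fp_rate():.2e} mul-add/s")
```

    Running the same script on "bare metal" and inside a guest gives a crude ratio of virtualization overhead for each metric.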

  15. Applications and modelling of bulk HTSs in brushless ac machines

    International Nuclear Information System (INIS)

    Barnes, G.J.

    2000-01-01

    The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited: their hysteretic nature, their flux shielding properties, their ability to trap large flux densities and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines and linear motors respectively. Each one of these machines is addressed separately and computer simulations that reveal the current and field distributions within the machines are used to explain their operation. (author)

  16. Mechanical design of walking machines.

    Science.gov (United States)

    Arikawa, Keisuke; Hirose, Shigeo

    2007-01-15

    The performance of existing actuators, such as electric motors, is very limited, be it power-weight ratio or energy efficiency. In this paper, we discuss the method to design a practical walking machine under this severe constraint with focus on two concepts, the gravitationally decoupled actuation (GDA) and the coupled drive. The GDA decouples the driving system against the gravitational field to suppress generation of negative power and improve energy efficiency. On the other hand, the coupled drive couples the driving system to distribute the output power equally among actuators and maximize the utilization of installed actuator power. First, we depict the GDA and coupled drive in detail. Then, we present actual machines, TITAN-III and VIII, quadruped walking machines designed on the basis of the GDA, and NINJA-I and II, quadruped wall walking machines designed on the basis of the coupled drive. Finally, we discuss walking machines that travel on three-dimensional terrain (3D terrain), which includes the ground, walls and ceiling. Then, we demonstrate with computer simulation that we can selectively leverage GDA and coupled drive by walking posture control.

  17. Secure Virtualization Environment Based on Advanced Memory Introspection

    Directory of Open Access Journals (Sweden)

    Shuhui Zhang

    2018-01-01

    Most existing virtual machine introspection (VMI) technologies analyze the status of a target virtual machine under the assumption that the operating system (OS) version and kernel structure information are known at the hypervisor level. In this paper, we propose a model of virtual machine (VM) security monitoring based on memory introspection. Using a hardware-based approach to acquire the physical memory of the host machine in real time, the security of the host machine and VM can be diagnosed. Furthermore, a novel approach for VM memory forensics based on the virtual machine control structure (VMCS) is put forward. By analyzing the memory of the host machine, the running VMs can be detected and their high-level semantic information can be reconstructed. Then, malicious activity in the VMs can be identified in a timely manner. Moreover, by mutually analyzing the memory content of the host machine and VMs, VM escape may be detected. Compared with previous memory introspection technologies, our solution can automatically reconstruct the comprehensive running state of a target VM without any prior knowledge and is strongly resistant to attacks with high reliability. We developed a prototype system called VEDefender. Experimental results indicate that our system can handle the VMs of mainstream Linux and Windows OS versions with high efficiency and does not influence the performance of the host machine and VMs.

  18. Customizable Memory Schemes for Data Parallel Architectures

    NARCIS (Netherlands)

    Gou, C.

    2011-01-01

    Memory system efficiency is crucial for any processor to achieve high performance, especially in the case of data parallel machines. Processing capabilities of parallel lanes will be wasted, when data requests are not accomplished in a sustainable and timely manner. Irregular vector memory accesses

  19. The parallel processing of EGS4 code on distributed memory scalar parallel computer:Intel Paragon XP/S15-256

    Energy Technology Data Exchange (ETDEWEB)

    Takemiya, Hiroshi; Ohta, Hirofumi; Honma, Ichirou

    1996-03-01

    The parallelization of the Electro-Magnetic Cascade Monte Carlo Simulation Code EGS4 on the distributed memory scalar parallel computer Intel Paragon XP/S15-256 is described. EGS4 has the feature that the calculation time for one incident particle differs greatly from particle to particle because of the dynamic generation of secondary particles and the different behavior of each particle. Granularity for parallel processing, the parallel programming model and the algorithm of parallel random number generation are discussed, and two methods, which allocate particles dynamically or statically, are used for the purpose of realizing high-speed parallel processing of this code. Among the four problems chosen for performance evaluation, the speedup factors for three problems reached nearly 100 times with 128 processors. It has been found that when both the calculation time for each incident particle and its dispersion are large, it is preferable to use the dynamic particle allocation method, which can average the load across the processors. It has also been found that when they are small, it is preferable to use the static particle allocation method, which reduces the communication overhead. Moreover, it is pointed out that to get accurate results it is necessary to use double-precision variables in the EGS4 code. Finally, the workflow of program parallelization is analyzed, and tools for program parallelization are discussed in the light of the experience gained from the EGS4 parallelization. (author)
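
    The static-versus-dynamic allocation trade-off described above can be sketched with Python's multiprocessing pool standing in for message passing on the Paragon. Everything here is a hypothetical surrogate: `track_particle` mimics an EGS4 shower whose cost varies strongly per history, and the two allocation functions differ only in how work is handed to workers.

```python
import random
from multiprocessing import Pool

def track_particle(seed):
    """Stand-in for one EGS4 shower: per-particle cost varies strongly
    because secondary-particle generation makes histories uneven."""
    rng = random.Random(seed)
    steps = rng.randint(1, 2000)        # uneven work per history
    acc = 0.0
    for _ in range(steps):
        acc += rng.random()
    return acc

def static_allocation(n_particles, n_workers):
    """Pre-assign equal contiguous blocks: low overhead, risks load imbalance."""
    with Pool(n_workers) as pool:
        return sum(pool.map(track_particle, range(n_particles),
                            chunksize=max(1, n_particles // n_workers)))

def dynamic_allocation(n_particles, n_workers):
    """Hand out particles one at a time: balanced load, more communication."""
    with Pool(n_workers) as pool:
        return sum(pool.imap_unordered(track_particle, range(n_particles),
                                       chunksize=1))
```

    Both variants compute the same tally; the choice between them, as the abstract notes, depends on how large and how dispersed the per-particle cost is.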

  20. Distance measurements across randomly distributed nitroxide probes from the temperature dependence of the electron spin phase memory time at 240 GHz

    Science.gov (United States)

    Edwards, Devin T.; Takahashi, Susumu; Sherwin, Mark S.; Han, Songi

    2012-10-01

    At 8.5 T, the polarization of an ensemble of electron spins is essentially 100% at 2 K, and decreases to 30% at 20 K. The strong temperature dependence of the electron spin polarization between 2 and 20 K leads to the phenomenon of spin bath quenching: temporal fluctuations of the dipolar magnetic fields associated with the energy-conserving spin "flip-flop" process are quenched as the temperature of the spin bath is lowered to the point of nearly complete spin polarization. This work uses pulsed electron paramagnetic resonance (EPR) at 240 GHz to investigate the effects of spin bath quenching on the phase memory times (TM) of randomly-distributed ensembles of nitroxide molecules below 20 K at 8.5 T. For a given electron spin concentration, a characteristic, dipolar flip-flop rate (W) is extracted by fitting the temperature dependence of TM to a simple model of decoherence driven by the spin flip-flop process. In frozen solutions of 4-Amino-TEMPO, a stable nitroxide radical in a deuterated water-glass, a calibration is used to quantify average spin-spin distances as large as r̄ = 6.6 nm from the dipolar flip-flop rate. For longer distances, nuclear spin fluctuations, which are not frozen out, begin to dominate over the electron spin flip-flop processes, placing an effective ceiling on this method for nitroxide molecules. For a bulk solution with a three-dimensional distribution of nitroxide molecules at concentration n, we find W ∝ n ∝ 1/r̄³, which is consistent with magnetic dipolar spin interactions. Alternatively, we observe W ∝ n^(3/2) for nitroxides tethered to a quasi two-dimensional surface of large (Ø ≈ 200 nm), unilamellar, lipid vesicles, demonstrating that the quantification of spin bath quenching can also be used to discern the geometry of molecular assembly or organization.
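
    The scaling W ∝ n quoted for the three-dimensional case follows from the dipolar 1/r̄³ interaction once an average spin-spin distance is assigned to a concentration. A minimal sketch of that conversion, assuming the common Wigner-Seitz convention (the record does not state the paper's exact calibration):

```python
import math

def wigner_seitz_radius_nm(conc_per_nm3):
    """Average spin-spin distance via the Wigner-Seitz convention:
    each spin occupies a sphere of volume 1/n, so r = (3 / (4*pi*n))**(1/3)."""
    return (3.0 / (4.0 * math.pi * conc_per_nm3)) ** (1.0 / 3.0)

def concentration_per_nm3(radius_nm):
    """Inverse relation: n = 3 / (4*pi*r**3), hence W ∝ n ∝ 1/r**3."""
    return 3.0 / (4.0 * math.pi * radius_nm ** 3)
```

    Diluting the sample by a factor of eight doubles the average distance, which is the 1/r̄³ scaling the abstract attributes to dipolar interactions.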

  1. MEMORY MODULATION

    Science.gov (United States)

    Roozendaal, Benno; McGaugh, James L.

    2011-01-01

    Our memories are not all created equally strong: Some experiences are well remembered while others are remembered poorly, if at all. Research on memory modulation investigates the neurobiological processes and systems that contribute to such differences in the strength of our memories. Extensive evidence from both animal and human research indicates that emotionally significant experiences activate hormonal and brain systems that regulate the consolidation of newly acquired memories. These effects are integrated through noradrenergic activation of the basolateral amygdala which regulates memory consolidation via interactions with many other brain regions involved in consolidating memories of recent experiences. Modulatory systems not only influence neurobiological processes underlying the consolidation of new information, but also affect other mnemonic processes, including memory extinction, memory recall and working memory. In contrast to their enhancing effects on consolidation, adrenal stress hormones impair memory retrieval and working memory. Such effects, as with memory consolidation, require noradrenergic activation of the basolateral amygdala and interactions with other brain regions. PMID:22122145

  2. Memory Matters

    Science.gov (United States)

    Memory Matters (KidsHealth / For Kids): What's in ... of your complex and multitalented brain. What Is Memory? When an event happens, when you learn something, ...

  3. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  4. Machine assisted histogram classification

    Science.gov (United States)

    Benyó, B.; Gaspar, C.; Somogyi, P.

    2010-04-01

    LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty or ageing components can be either done visually using instruments, such as the LHCb Histogram Presenter, or with the help of automated tools. In order to assist detector experts in handling the vast monitoring information resulting from the sheer size of the detector, we propose a graph based clustering tool combined with machine learning algorithm and demonstrate its use by processing histograms representing 2D hitmaps events. We prove the concept by detecting ion feedback events in the LHCb experiment's RICH subdetector.

  5. Machine assisted histogram classification

    Energy Technology Data Exchange (ETDEWEB)

    Benyo, B; Somogyi, P [BME-IIT, H-1117 Budapest, Magyar tudosok koerutja 2. (Hungary); Gaspar, C, E-mail: Peter.Somogyi@cern.c [CERN-PH, CH-1211 Geneve 23 (Switzerland)

    2010-04-01

    LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty or ageing components can be either done visually using instruments, such as the LHCb Histogram Presenter, or with the help of automated tools. In order to assist detector experts in handling the vast monitoring information resulting from the sheer size of the detector, we propose a graph based clustering tool combined with machine learning algorithm and demonstrate its use by processing histograms representing 2D hitmaps events. We prove the concept by detecting ion feedback events in the LHCb experiment's RICH subdetector.

  6. Frequent Statement and Dereference Elimination for Imperative and Object-Oriented Distributed Programs

    Science.gov (United States)

    El-Zawawy, Mohamed A.

    2014-01-01

    This paper introduces new approaches for the analysis of frequent statement and dereference elimination for imperative and object-oriented distributed programs running on parallel machines equipped with hierarchical memories. The paper uses languages whose address spaces are globally partitioned. Distributed programs allow defining the data layout, and threads may write to and read from other threads' memories. Three type systems (for imperative distributed programs) are the tools of the proposed techniques. The first type system defines for every program point a set of calculated (ready) statements and memory accesses. The second type system uses an enriched version of the types of the first type system and determines which of the ready statements and memory accesses are used later in the program. The third type system uses the information gathered so far to eliminate unnecessary statement computations and memory accesses (the analysis of frequent statement and dereference elimination). Extensions to these type systems are also presented to cover object-oriented distributed programs. Two advantages of our work over related work are the following. The hierarchical style of concurrent parallel computers is similar to the memory model used in this paper. In our approach, each analysis result is assigned a type derivation (which serves as a correctness proof). PMID:24892098

  7. MITS machine operations

    International Nuclear Information System (INIS)

    Flinchem, J.

    1980-01-01

    This document contains procedures which apply to operations performed on individual P-1c machines in the Machine Interface Test System (MITS) at AiResearch Manufacturing Company's Torrance, California Facility

  8. Brain versus Machine Control.

    Directory of Open Access Journals (Sweden)

    Jose M Carmena

    2004-12-01

    Dr. Octopus, the villain of the movie "Spiderman 2", is a fusion of man and machine. Neuroscientist Jose Carmena examines the facts behind this fictional account of a brain-machine interface.

  9. Applied machining technology

    CERN Document Server

    Tschätsch, Heinz

    2010-01-01

    Machining and cutting technologies are still crucial for many manufacturing processes. This reference presents all important machining processes in a comprehensive and coherent way. It includes many examples of concrete calculations, problems and solutions.

  10. Machining with abrasives

    CERN Document Server

    Jackson, Mark J

    2011-01-01

    Abrasive machining is key to obtaining the desired geometry and surface quality in manufacturing. This book discusses the fundamentals and advances in the abrasive machining processes. It provides a complete overview of developing areas in the field.

  11. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  12. Avalanches and generalized memory associativity in a network model for conscious and unconscious mental functioning

    Science.gov (United States)

    Siddiqui, Maheen; Wedemann, Roseli S.; Jensen, Henrik Jeldtoft

    2018-01-01

    We explore statistical characteristics of avalanches associated with the dynamics of a complex-network model, where two modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's ideas regarding the neuroses and that consciousness is related with symbolic and linguistic memory activity in the brain. It incorporates the Stariolo-Tsallis generalization of the Boltzmann Machine in order to model memory retrieval and associativity. In the present work, we define and measure avalanche size distributions during memory retrieval, in order to gain insight regarding basic aspects of the functioning of these complex networks. The avalanche sizes defined for our model should be related to the time consumed and also to the size of the neuronal region which is activated, during memory retrieval. This allows the qualitative comparison of the behaviour of the distribution of cluster sizes, obtained during fMRI measurements of the propagation of signals in the brain, with the distribution of avalanche sizes obtained in our simulation experiments. This comparison corroborates the indication that the Nonextensive Statistical Mechanics formalism may indeed be more well suited to model the complex networks which constitute brain and mental structure.
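
    The Tsallis-type generalization mentioned above replaces the Boltzmann factor with a q-exponential. The following is a minimal sketch of that ingredient only, not the full network model; the acceptance-rule form is an assumption, since details of the Tsallis-Stariolo dynamics vary between formulations.

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential: reduces to exp(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # outside the support of the q-exponential the value is defined as 0
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def acceptance(delta_e, temperature, q):
    """Generalized Metropolis-style acceptance for a unit flip with energy
    change delta_e; q = 1 recovers the usual Boltzmann factor."""
    return min(1.0, q_exponential(-delta_e / temperature, q))
```

    With q > 1 the acceptance curve has a heavier tail than the Boltzmann factor, which is one way such models alter retrieval and avalanche statistics relative to the standard Boltzmann Machine.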

  13. Peak performance: remote memory revisited

    NARCIS (Netherlands)

    Mühleisen, H.; Gonçalves, R.; Kersten, M.; Johnson, R.; Kemper, A.

    2013-01-01

    Many database systems share a need for large amounts of fast storage. However, economies of scale limit the utility of extending a single machine with an arbitrary amount of memory. The recent broad availability of the zero-copy data transfer protocol RDMA over low-latency and high-throughput

  14. Machine protection systems

    CERN Document Server

    Macpherson, A L

    2010-01-01

    A summary of the Machine Protection System of the LHC is given, with particular attention given to the outstanding issues to be addressed, rather than the successes of the machine protection system from the 2009 run. In particular, the issues of Safe Machine Parameter system, collimation and beam cleaning, the beam dump system and abort gap cleaning, injection and dump protection, and the overall machine protection program for the upcoming run are summarised.

  15. Emotional organization of autobiographical memory.

    Science.gov (United States)

    Schulkind, Matthew D; Woldorf, Gillian M

    2005-09-01

    The emotional organization of autobiographical memory was examined by determining whether emotional cues would influence autobiographical retrieval in younger and older adults. Unfamiliar musical cues that represented orthogonal combinations of positive and negative valence and high and low arousal were used. Whereas cue valence influenced the valence of the retrieved memories, cue arousal did not affect arousal ratings. However, high-arousal cues were associated with reduced response latencies. A significant bias to report positive memories was observed, especially for the older adults, but neither the distribution of memories across the life span nor response latencies varied across memories differing in valence or arousal. These data indicate that emotional information can serve as effective cues for autobiographical memories and that autobiographical memories are organized in terms of emotional valence but not emotional arousal. Thus, current theories of autobiographical memory must be expanded to include emotional valence as a primary dimension of organization.

  16. Dictionary of machine terms

    International Nuclear Information System (INIS)

    1990-06-01

    This book is a dictionary of machine terms, with introductory remarks and notes from the compilation committee. It gives descriptions of machine terms in alphabetical order from A to Z and also includes abbreviations of machine terms, a symbol table, how to read mathematical symbols, and abbreviations and terms used in drawings.

  17. Mankind, machines and people

    Energy Technology Data Exchange (ETDEWEB)

    Hugli, A

    1984-01-01

    The following questions are addressed: Is there a difference between machines and men, between human communication and communication with machines? Will we ever reach the point where the dream of artificial intelligence becomes a reality? Will thinking machines be able to replace the human spirit in all its aspects? Social consequences and philosophical aspects are addressed. 8 references.

  18. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

    Turing showed the existence of a model universal for the set of Turing machines in the sense that given an encoding of any Turing machine as input the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  19. HTS machine laboratory prototype

    DEFF Research Database (Denmark)

    machine. The machine comprises six stationary HTS field windings wound from both YBCO and BSCCO tape operated at liquid nitrogen temperature and enclosed in a cryostat, and a three phase armature winding spinning at up to 300 rpm. This design has full functionality of HTS synchronous machines. The design...

  20. Your Sewing Machine.

    Science.gov (United States)

    Peacock, Marion E.

    The programed instruction manual is designed to aid the student in learning the parts, uses, and operation of the sewing machine. Drawings of sewing machine parts are presented, and space is provided for the student's written responses. Following an introductory section identifying sewing machine parts, the manual deals with each part and its…

  1. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.
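
    The point-relaxation smoothers that make MSG-type algorithms easy to parallelize can be illustrated on a serial toy problem. This NumPy sketch of a two-grid V-cycle for the 1D Poisson equation is a stand-in for, not an implementation of, the paper's multiple semicoarsened grid scheme; the direct coarse solve stands in for recursion to coarser levels.

```python
import numpy as np

def smooth(u, f, h, iters=3, omega=2/3):
    """Weighted point-Jacobi sweeps for the 1D model problem -u'' = f."""
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(fine):
    """Injection onto every second point (full weighting is also common)."""
    return fine[::2].copy()

def prolong(coarse):
    """Linear interpolation back to the fine grid."""
    fine = np.zeros(2 * (len(coarse) - 1) + 1)
    fine[::2] = coarse
    fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:])
    return fine

def coarse_solve(rc, H):
    """Direct solve of the small coarse tridiagonal system."""
    m = len(rc) - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (H * H)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def v_cycle(u, f, h):
    """One two-grid V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = smooth(u, f, h)
    u += prolong(coarse_solve(restrict(residual(u, f, h)), 2 * h))
    return smooth(u, f, h)
```

    Only point relaxation and nearest-neighbor transfers appear here, which is exactly the property the paper exploits when mapping the algorithm to distributed-memory machines.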

  2. Learning and memory.

    Science.gov (United States)

    Brem, Anna-Katharine; Ran, Kathy; Pascual-Leone, Alvaro

    2013-01-01

    Learning and memory functions are crucial in the interaction of an individual with the environment and involve the interplay of large, distributed brain networks. Recent advances in technologies to explore neurobiological correlates of neuropsychological paradigms have increased our knowledge about human learning and memory. In this chapter we first review and define memory and learning processes from a neuropsychological perspective. Then we provide some illustrations of how noninvasive brain stimulation can play a major role in the investigation of memory functions, as it can be used to identify cause-effect relationships and chronometric properties of neural processes underlying cognitive steps. In clinical medicine, transcranial magnetic stimulation may be used as a diagnostic tool to understand memory and learning deficits in various patient populations. Furthermore, noninvasive brain stimulation is also being applied to enhance cognitive functions, offering exciting translational therapeutic opportunities in neurology and psychiatry. © 2013 Elsevier B.V. All rights reserved.

  3. High-bandwidth memory interface

    CERN Document Server

    Kim, Chulwoo; Song, Junyoung

    2014-01-01

    This book provides an overview of recent advances in memory interface design at both the architecture and circuit levels. Coverage includes signal integrity and testing, TSV interface, high-speed serial interface including equalization, ODT, pre-emphasis, wide I/O interface including crosstalk, skew cancellation, and clock generation and distribution. Trends for further bandwidth enhancement are also covered.
    • Enables readers with minimal background in memory design to understand the basics of high-bandwidth memory interface design;
    • Presents state-of-the-art techniques for memory interface design;
    • Covers memory interface design at both the circuit level and system architecture level.

  4. Quantum machine learning.

    Science.gov (United States)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  5. Asynchronized synchronous machines

    CERN Document Server

    Botvinnik, M M

    1964-01-01

    Asynchronized Synchronous Machines focuses on the theoretical research on asynchronized synchronous (AS) machines, which are "hybrids" of synchronous and induction machines that can operate with slip. Topics covered in this book include the initial equations; vector diagram of an AS machine; regulation in cases of deviation from the law of full compensation; parameters of the excitation system; and schematic diagram of an excitation regulator. The possible applications of AS machines and its calculations in certain cases are also discussed. This publication is beneficial for students and indiv

  6. An innovative approach to achieve re-centering and ductility of cement mortar beams through randomly distributed pseudo-elastic shape memory alloy fibers

    Science.gov (United States)

    Shajil, N.; Srinivasan, S. M.; Santhanam, M.

    2012-04-01

    Fibers can play a major role in the post-cracking behavior of concrete members because of their ability to bridge cracks and distribute the stress across the crack. The addition of steel fibers to mortar and concrete can improve the toughness of the structural member and impart significant energy dissipation through slow pull-out. However, steel fibers undergo plastic deformation at low strain levels and cannot regain their shape upon unloading. This is a major disadvantage under strong cyclic loading conditions, such as those caused by earthquakes, where a self-centering ability of the fibers is a desired characteristic in addition to the ductility of the reinforced cement concrete. Fibers made from an alternative material such as a shape memory alloy (SMA) could offer scope for re-centering, thus improving performance especially after a severe loading has occurred. In this study, the load-deformation characteristics of SMA fiber reinforced cement mortar beams under cyclic loading conditions were investigated to assess the re-centering performance. The study involved experiments on prismatic members and related analysis for the assessment and prediction of re-centering. The performance of NiTi fiber reinforced mortars is compared with that of mortars with the same volume fraction of steel fibers. Since re-entrant corners and beam-column joints are prone to failure during a strong ground motion, a study was conducted to determine the behavior of these regions when reinforced with NiTi fibers, and comparison is made with the results of the steel fiber reinforced cases. NiTi fibers showed significantly improved re-centering and energy dissipation characteristics compared to the steel fibers.

  7. Design and Implementation of Distributed Crawler System Based on Scrapy

    Science.gov (United States)

    Fan, Yuhao

    2018-01-01

    At present, some large-scale search engines at home and abroad only provide users with non-custom search services, and a single-machine web crawler cannot handle such demanding tasks. In this paper, through study of the original Scrapy framework, the framework is improved by combining Scrapy and Redis: a distributed crawler system for Web information based on the Scrapy framework is designed and implemented, and a Bloom filter algorithm is applied to the dupefilter module to reduce memory consumption. The movie information captured from douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
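
    The Bloom filter applied to the dupefilter module trades a small false-positive rate for large memory savings when de-duplicating crawled URLs. The class below is a self-contained sketch of the idea only; in a distributed deployment such as the one described, the bit array would live in Redis, and the names and parameters here are illustrative.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for duplicate-URL detection: no false
    negatives, a tunable false-positive rate, and constant memory."""

    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # derive num_hashes independent bit positions from salted SHA-256
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all((self.bits[pos // 8] >> (pos % 8)) & 1
                   for pos in self._positions(item))
```

    A crawler would call `url in bf` before scheduling a request and `bf.add(url)` after, skipping any URL the filter already reports as seen.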

  8. The distribution of air bubble size in the pneumo-mechanical flotation machine. Rozkład wielkości pęcherzyków powietrza w pneumo-mechanicznej maszynie flotacyjnej

    Science.gov (United States)

    Brożek, Marian; Młynarczykowska, Anna

    2012-12-01

    The flotation rate constant is the value characterizing the kinetics of cyclic flotation. In the statistical theory of flotation its value is a function of the probabilities of collision, adhesion and detachment of a particle from the air bubble. The particle-air bubble collision plays a key role, since adhesion must be preceded by a collision. The probability of such an event occurring is proportional to the ratio of the particle diameter to the bubble diameter. When the particle size is given, it is possible to control the value of the collision probability by means of the air bubble size. Consequently, it is important to express the effect of physical and physicochemical factors upon the diameter of air bubbles as a mathematical dependence. In the pneumo-mechanical flotation machine the air bubbles are generated by the blades of the rotor. The dispersion rate is affected by, among others, the rotational speed of the rotor, the air flow rate and the liquid surface tension, which depends on the type and concentration of the applied flotation reagents. In this paper the authors present the distribution of air bubble diameters derived from the above factors according to the laws of thermodynamics. The correctness of the derived dependences is verified empirically.
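
The collision-probability argument can be made concrete with a toy calculation. This is an illustration only, not the paper's derivation: it assumes the stated proportionality with a unit constant, P_c = d_p/d_b, and an assumed log-normal bubble-size distribution standing in for the thermodynamically derived one.

```python
import random

random.seed(0)

# Assumptions (not from the paper): unit proportionality constant,
# log-normal bubble diameters with median 1 mm.
d_particle = 0.1  # particle diameter, mm
bubbles = [random.lognormvariate(0, 0.3) for _ in range(10_000)]  # bubble diameters, mm

# Expected collision probability over the bubble-size distribution,
# capped at 1 for unphysically small bubbles.
p_collision = sum(min(d_particle / d_b, 1.0) for d_b in bubbles) / len(bubbles)
```

The point of controlling bubble size follows directly: halving the median bubble diameter roughly doubles the expected collision probability for a given particle size.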

  9. Quantum cloning machines and the applications

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Heng, E-mail: hfan@iphy.ac.cn [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Collaborative Innovation Center of Quantum Matter, Beijing 100190 (China); Wang, Yi-Nan; Jing, Li [School of Physics, Peking University, Beijing 100871 (China); Yue, Jie-Dong [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Shi, Han-Duo; Zhang, Yong-Liang; Mu, Liang-Zhu [School of Physics, Peking University, Beijing 100871 (China)

    2014-11-20

    The no-cloning theorem, fundamental to quantum mechanics and quantum information science, states that an unknown quantum state cannot be cloned perfectly. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead try to clone it perfectly with the largest possible probability. Thus various quantum cloning machines have been designed for different quantum information protocols. In particular, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as the BB84 protocol, the six-state protocol, the B92 protocol and their generalizations. Well-known quantum cloning machines include the universal quantum cloning machine, the phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we give a complete description of these important developments in quantum cloning and some related topics. The review is self-contained, and in particular we try to present detailed formulations so that further study can build on these results.
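
The "approximate cloning with optimal fidelity" trade-off mentioned in the abstract has a closed form for the universal symmetric qubit cloner: the Gisin-Massar fidelity F(N, M) = (MN + M + N) / (M(N + 2)) for cloning N identical copies into M. The snippet below just evaluates that known formula, as an illustration.

```python
from fractions import Fraction

def universal_cloning_fidelity(N, M):
    """Optimal fidelity of the N -> M universal symmetric quantum
    cloning machine for qubits (Gisin-Massar formula)."""
    return Fraction(M * N + M + N, M * (N + 2))

# The 1 -> 2 case recovers the Buzek-Hillery universal cloner: F = 5/6.
f12 = universal_cloning_fidelity(1, 2)

# As M grows with N = 1, fidelity approaches 2/3, the best achievable
# by measuring the state and preparing copies (measure-and-prepare limit).
f_limit = universal_cloning_fidelity(1, 10**6)
```

The M → ∞ limit is what connects cloning machines to QKD security analyses: an eavesdropper's cloner cannot do better than these bounds.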

  10. Quantum cloning machines and the applications

    International Nuclear Information System (INIS)

    Fan, Heng; Wang, Yi-Nan; Jing, Li; Yue, Jie-Dong; Shi, Han-Duo; Zhang, Yong-Liang; Mu, Liang-Zhu

    2014-01-01

    The no-cloning theorem, fundamental to quantum mechanics and quantum information science, states that an unknown quantum state cannot be cloned perfectly. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead try to clone it perfectly with the largest possible probability. Thus various quantum cloning machines have been designed for different quantum information protocols. In particular, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as the BB84 protocol, the six-state protocol, the B92 protocol and their generalizations. Well-known quantum cloning machines include the universal quantum cloning machine, the phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we give a complete description of these important developments in quantum cloning and some related topics. The review is self-contained, and in particular we try to present detailed formulations so that further study can build on these results.

  11. Scikit-learn: Machine Learning in Python

    OpenAIRE

    Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu; Perrot, Matthieu

    2011-01-01

    Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic ...

  12. Scikit-learn: Machine Learning in Python

    OpenAIRE

    Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Louppe, Gilles; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu

    2012-01-01

    Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings....
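
The API consistency the abstract emphasizes is the uniform fit/predict/score estimator interface. A minimal sketch using scikit-learn's bundled iris dataset (model choice and split parameters here are arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Any estimator, or a whole preprocessing+model pipeline, exposes the
# same fit/predict/score interface.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

Swapping `LogisticRegression` for any other classifier (`SVC`, `RandomForestClassifier`, ...) leaves the rest of the code unchanged, which is precisely the non-specialist-friendly design the abstract describes.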

  13. Axial flux permanent magnet brushless machines

    CERN Document Server

    Gieras, Jacek F; Kamper, Maarten J

    2008-01-01

    Axial Flux Permanent Magnet (AFPM) brushless machines are modern electrical machines with many advantages over their conventional counterparts. They are increasingly used in consumer electronics, public life, instrumentation and automation systems, clinical engineering, industrial electromechanical drives, the automobile manufacturing industry, electric and hybrid electric vehicles, marine vessels and toys. They are also used in more-electric aircraft and many other applications on a larger scale. New applications have also emerged in distributed generation systems (wind turbine generators).

  14. The memory of volatility

    Directory of Open Access Journals (Sweden)

    Kai R. Wenger

    2018-03-01

    Full Text Available The focus of the volatility literature on forecasting and the predominance of the conceptually simpler HAR model over long memory stochastic volatility models has led to the fact that the actual degree of memory estimates has rarely been considered. Estimates in the literature range roughly between 0.4 and 0.6 - that is, from the higher stationary to the lower non-stationary region. This difference, however, has important practical implications - such as the existence or non-existence of the fourth moment of the return distribution. Inference on the memory order is complicated by the presence of measurement error in realized volatility and the potential of spurious long memory. In this paper we provide a comprehensive analysis of the memory in variances of international stock indices and exchange rates. On the one hand, we find that the variance of exchange rates is subject to spurious long memory and the true memory parameter is in the higher stationary range. Stock index variances, on the other hand, are free of low frequency contaminations and the memory is in the lower non-stationary range. These results are obtained using state of the art local Whittle methods that allow consistent estimation in presence of perturbations or low frequency contaminations.
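
The local Whittle estimator the abstract relies on can be sketched compactly: minimize Robinson's profiled objective over the memory parameter d using the periodogram at the first m Fourier frequencies. The bandwidth choice m = n^0.65 and the grid search below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def local_whittle_d(x, m=None):
    """Local Whittle estimate of the memory parameter d.
    Minimizes R(d) = log(mean(I_j * lam_j^(2d))) - 2d * mean(log lam_j)
    over the first m Fourier frequencies (Robinson-style estimator)."""
    n = len(x)
    if m is None:
        m = int(n ** 0.65)          # assumed bandwidth rule
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m nonzero Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)

    def R(d):
        return np.log(np.mean(I * lam ** (2 * d))) - 2 * d * np.mean(np.log(lam))

    # Simple grid search over the admissible range
    grid = np.linspace(-0.49, 0.99, 400)
    return grid[np.argmin([R(d) for d in grid])]

rng = np.random.default_rng(0)
d_hat = local_whittle_d(rng.standard_normal(4096))  # white noise: true d = 0
```

For short memory (white noise) the estimate sits near 0; values around 0.4-0.6, as the abstract discusses, would indicate long memory near the stationarity boundary d = 0.5.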

  15. Formation of the distributed NiSiGe nanocrystals nonvolatile memory formed by rapidly annealing in N2 and O2 ambient

    International Nuclear Information System (INIS)

    Hu, Chih-Wei; Chang, Ting-Chang; Tu, Chun-Hao; Chiang, Cheng-Neng; Lin, Chao-Cheng; Chen, Min-Chen; Chang, Chun-Yen; Sze, Simon M.; Tseng, Tseung-Yuen

    2010-01-01

    In this work, the electrical characteristics of a Ge-incorporated nickel silicide (NiSiGe) nanocrystal memory device formed by rapid thermal annealing in N2 and O2 ambient have been studied. The trapping layer was deposited by co-sputtering NiSi2 and Ge simultaneously. Transmission electron microscope results indicate that NiSiGe nanocrystals were clearly formed in both samples. The memory devices show obvious charge-storage ability under capacitance-voltage measurement. However, it is found that the NiSiGe nanocrystal device formed by annealing in N2 ambient has a smaller memory window and better retention characteristics than that annealed in O2 ambient. Related material analyses were then used to confirm that the oxidized Ge elements affect the charge-storage sites and the electrical performance of the nanocrystal memory.

  16. Distribution and levels of [125I]IGF-I, [125I]IGF-II and [125I]insulin receptor binding sites in the hippocampus of aged memory-unimpaired and -impaired rats

    International Nuclear Information System (INIS)

    Quirion, R.; Rowe, W.; Kar, S.; Dore, S.

    1997-01-01

    The insulin-like growth factors (IGF-I and IGF-II) and insulin are localized within distinct brain regions and their respective functions are mediated by specific membrane receptors. High densities of binding sites for these growth factors are discretely and differentially distributed throughout the brain, with prominent levels localized to the hippocampal formation. IGFs and insulin, in addition to their growth promoting actions, are considered to play important roles in the development and maintenance of normal cell functions throughout life. We compared the anatomical distribution and levels of IGF and insulin receptors in young (five month) and aged (25 month) memory-impaired and memory-unimpaired male Long-Evans rats as determined in the Morris water maze task in order to determine if alterations in IGF and insulin activity may be related to the emergence of cognitive deficits in the aged memory-impaired rat. In the hippocampus, [125I]IGF-I receptors are concentrated primarily in the dentate gyrus (DG) and the CA3 sub-field while high amounts of [125I]IGF-II binding sites are localized to the pyramidal cell layer and the granular cell layer of the DG. [125I]insulin binding sites are mostly found in the molecular layer of the DG and the CA1 sub-field. No significant differences were found in [125I]IGF-I, [125I]IGF-II or [125I]insulin binding levels in any regions or laminae of the hippocampus of young vs aged rats, and deficits in cognitive performance did not relate to altered levels of these receptors in aged memory-impaired vs aged memory-unimpaired rats. Other regions, including various cortical areas, were also examined and failed to reveal any significant differences between the three groups studied. It thus appears that IGF-I, IGF-II and insulin receptor sites are not markedly altered during the normal ageing process in the Long-Evans rat, in spite of significant learning deficits in a sub-group (memory-impaired) of aged animals. Hence

  17. Analysis towards VMEM File of a Suspended Virtual Machine

    Science.gov (United States)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of particular importance. This paper analyzes the file suffixed .vmem in VMware Workstation, which stores all pseudo-physical memory in an image. The internal structure of the .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with both advantages and limits analyzed. We conclude with an outlook.

  18. Identification of memory reactivation during sleep by EEG classification.

    Science.gov (United States)

    Belal, Suliman; Cousins, James; El-Deredy, Wael; Parkes, Laura; Schneider, Jules; Tsujimura, Hikaru; Zoumpoulaki, Alexia; Perapoch, Marta; Santamaria, Lorena; Lewis, Penelope

    2018-04-17

    Memory reactivation during sleep is critical for consolidation, but also extremely difficult to measure as it is subtle, distributed and temporally unpredictable. This article reports a novel method for detecting such reactivation in standard sleep recordings. During learning, participants produced a complex sequence of finger presses, with each finger cued by a distinct audio-visual stimulus. Auditory cues were then re-played during subsequent sleep to trigger neural reactivation through a method known as targeted memory reactivation (TMR). Next, we used electroencephalography data from the learning session to train a machine learning classifier, and then applied this classifier to sleep data to determine how successfully each tone had elicited memory reactivation. Neural reactivation was classified above chance in all participants when TMR was applied in SWS, and in 5 of the 14 participants to whom TMR was applied in N2. Classification success reduced across numerous repetitions of the tone cue, suggesting either a gradually reducing responsiveness to such cues or a plasticity-related change in the neural signature as a result of cueing. We believe this method will be valuable for future investigations of memory consolidation. Copyright © 2018 Elsevier Inc. All rights reserved.
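
The train-on-wake, classify-on-sleep logic described above can be sketched on synthetic data. This nearest-centroid stand-in, with invented feature dimensions, noise levels, and class structure, illustrates only the pipeline shape, not the authors' actual EEG features or classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 2 cue classes, 8 EEG-derived features per epoch.
# Class-specific mean vectors stand in for cue-evoked spatial patterns.
means = rng.normal(0, 1, size=(2, 8))

def epochs(n, label, noise):
    """Generate n synthetic feature epochs for one cue class."""
    return means[label] + rng.normal(0, noise, size=(n, 8))

# "Wake" training data: relatively clean responses during learning
X_train = np.vstack([epochs(50, 0, 0.5), epochs(50, 1, 0.5)])
y_train = np.repeat([0, 1], 50)

# Fit a nearest-centroid classifier on the wake data
centroids = np.array([X_train[y_train == k].mean(axis=0) for k in (0, 1)])

def classify(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# "Sleep" data: same underlying patterns, noisier (reactivation is subtle)
X_sleep = np.vstack([epochs(30, 0, 1.5), epochs(30, 1, 1.5)])
y_sleep = np.repeat([0, 1], 30)
acc = (classify(X_sleep) == y_sleep).mean()  # above-chance = reactivation detected
```

Comparing `acc` against chance level (0.5 here) mirrors the paper's test of whether each tone cue elicited detectable reactivation.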

  19. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. This requires exploiting the data parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependency between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address

  20. Machine Distribution. Microcomputing Working Papers Series.

    Science.gov (United States)

    Drexel Univ., Philadelphia, PA. Microcomputing Program.

    During the academic year 1983-84, Drexel University instituted a new policy requiring all incoming students to have access to a microcomputer. The computer chosen to fulfill this requirement was the Macintosh from Apple Computer, Inc. This paper provides a brief description of the process undertaken to select the appropriate computer (i.e.,…

  1. One-way shared memory

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2018-01-01

    Standard multicore processors use the shared main memory via the on-chip caches for communication between cores. However, this form of communication has two limitations: (1) it is hardly time-predictable and therefore not a good solution for real-time systems and (2) this single shared memory is a bottleneck in the system. This paper presents a communication architecture for time-predictable multicore systems where core-local memories are distributed on the chip. A network-on-chip constantly copies data from a sender core-local memory to a receiver core-local memory. As this copying is performed in one direction we call this architecture a one-way shared memory. With the use of time-division multiplexing for the memory accesses and the network-on-chip routers we achieve a time-predictable solution where the communication latency and bandwidth can be bounded. An example architecture for a 3...
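
The one-way, TDM-scheduled copying can be modeled in software to see why latency is bounded. This is a toy simulation with invented parameters (4 cores, one word per slot); the paper's design is a hardware network-on-chip, not shown here.

```python
# Each core has a local memory with a TX buffer (words it sends to every
# other core) and an RX buffer (words it receives from every other core).
CORES = 4
tx = [[f"c{i}w{j}" for j in range(CORES)] for i in range(CORES)]  # tx[i][j]: word core i sends to core j
rx = [[None] * CORES for _ in range(CORES)]                       # rx[j][i]: word core j received from core i

# Fixed TDM schedule: one slot per (sender, receiver) pair, repeated forever.
schedule = [(s, r) for s in range(CORES) for r in range(CORES) if s != r]

cycles = 0
for s, r in schedule:      # one word copied per slot, in one direction only
    rx[r][s] = tx[s][r]
    cycles += 1

# Because the schedule is fixed, worst-case communication latency is simply
# the schedule length: bounded and time-predictable by construction.
worst_case = len(schedule)
```

No arbitration or caching is involved, which is exactly what removes the timing unpredictability of a conventional shared memory.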

  2. Global aspects of radiation memory

    International Nuclear Information System (INIS)

    Winicour, J

    2014-01-01

    Gravitational radiation has a memory effect represented by a net change in the relative positions of test particles. Both the linear and nonlinear sources proposed for this radiation memory are of the 'electric' type, or E mode, as characterized by the even parity of the polarization pattern. Although 'magnetic' type, or B mode, radiation memory is mathematically possible, no physically realistic source has been identified. There is an electromagnetic counterpart to radiation memory in which charged test particles receive a net velocity 'kick'. Again, the physically realistic sources of electromagnetic radiation memory that have been identified are of the electric type. In this paper, a global null cone description of the electromagnetic field is applied to establish the non-existence of B-mode radiation memory and the non-existence of E-mode radiation memory due to a bound charge distribution.

  3. Pattern recognition & machine learning

    CERN Document Server

    Anzai, Y

    1992-01-01

    This is the first text to provide a unified and self-contained introduction to visual pattern recognition and machine learning. It is useful as a general introduction to artificial intelligence and knowledge engineering, and no previous knowledge of pattern recognition or machine learning is necessary. It covers the basics of various pattern recognition and machine learning methods. Translated from Japanese, the book also features chapter exercises, keywords, and summaries.

  4. Support vector machines applications

    CERN Document Server

    Guo, Guodong

    2014-01-01

    Support vector machines (SVM) have both a solid mathematical background and good performance in practical applications. This book focuses on the recent advances and applications of the SVM in different areas, such as image processing, medical practice, computer vision, pattern recognition, machine learning, applied statistics, business intelligence, and artificial intelligence. The aim of this book is to create a comprehensive source on support vector machine applications, especially some recent advances.

  5. The Newest Machine Material

    International Nuclear Information System (INIS)

    Seo, Yeong Seop; Choe, Byeong Do; Bang, Meong Sung

    2005-08-01

    This book gives descriptions of machine material with classification of machine material and selection of machine material, structure and connection of material, coagulation of metal and crystal structure, equilibrium diagram, properties of metal material, elasticity and plasticity, biopsy of metal, material test and nondestructive test. It also explains steel material such as heat treatment of steel, cast iron and cast steel, nonferrous metal materials, non metallic materials, and new materials.

  6. Introduction to machine learning

    OpenAIRE

    Baştanlar, Yalın; Özuysal, Mustafa

    2014-01-01

    The machine learning field, which can be briefly defined as enabling computers to make successful predictions using past experiences, has developed impressively in recent years with the help of the rapid increase in the storage capacity and processing power of computers. Together with many other disciplines, machine learning methods have been widely employed in bioinformatics. The difficulties and cost of biological analyses have led to the development of sophisticated machine learning app...

  7. Machinability of advanced materials

    CERN Document Server

    Davim, J Paulo

    2014-01-01

    Machinability of Advanced Materials addresses the level of difficulty involved in machining a material, or multiple materials, with the appropriate tooling and cutting parameters.  A variety of factors determine a material's machinability, including tool life rate, cutting forces and power consumption, surface integrity, limiting rate of metal removal, and chip shape. These topics, among others, and multiple examples comprise this research resource for engineering students, academics, and practitioners.

  8. Machining of titanium alloys

    CERN Document Server

    2014-01-01

    This book presents a collection of examples illustrating the resent research advances in the machining of titanium alloys. These materials have excellent strength and fracture toughness as well as low density and good corrosion resistance; however, machinability is still poor due to their low thermal conductivity and high chemical reactivity with cutting tool materials. This book presents solutions to enhance machinability in titanium-based alloys and serves as a useful reference to professionals and researchers in aerospace, automotive and biomedical fields.

  9. Tribology in machine design

    CERN Document Server

    Stolarski, Tadeusz

    1999-01-01

    "Tribology in Machine Design is strongly recommended for machine designers, and engineers and scientists interested in tribology. It should be in the engineering library of companies producing mechanical equipment." - Applied Mechanics Review. Tribology in Machine Design explains the role of tribology in the design of machine elements. It shows how algorithms developed from the basic principles of tribology can be used in a range of practical applications within mechanical devices and systems. The computer offers today's designer the possibility of greater stringen

  10. Induction machine handbook

    CERN Document Server

    Boldea, Ion

    2002-01-01

    Often called the workhorse of industry, the advent of power electronics and advances in digital control are transforming the induction motor into the racehorse of industrial motion control. Now, the classic texts on induction machines are nearly three decades old, while more recent books on electric motors lack the necessary depth and detail on induction machines.The Induction Machine Handbook fills industry's long-standing need for a comprehensive treatise embracing the many intricate facets of induction machine analysis and design. Moving gradually from simple to complex and from standard to

  11. Chaotic Boltzmann machines

    Science.gov (United States)

    Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-01-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented. PMID:23558425
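
For reference, the conventional stochastic Boltzmann machine that the chaotic variant is benchmarked against can be sketched with Gibbs sampling. The weights and biases below are arbitrary illustrative values; the chaotic pseudo-billiard dynamics of the paper itself are not reproduced here.

```python
import itertools, math, random

random.seed(0)

# Small fully-visible Boltzmann machine: 3 binary units, symmetric weights.
W = [[0.0, 0.5, -0.3], [0.5, 0.0, 0.8], [-0.3, 0.8, 0.0]]
b = [0.2, -0.1, 0.4]

def energy(s):
    return -sum(b[i] * s[i] for i in range(3)) \
           - sum(W[i][j] * s[i] * s[j] for i in range(3) for j in range(i + 1, 3))

# Exact Boltzmann distribution over the 8 states, for comparison
states = list(itertools.product([0, 1], repeat=3))
Z = sum(math.exp(-energy(s)) for s in states)
exact = {s: math.exp(-energy(s)) / Z for s in states}

# Gibbs sampling: each unit is resampled from its conditional distribution,
# p(s_i = 1 | rest) = sigmoid(b_i + sum_j W_ij s_j).
s, counts = [0, 0, 0], {st: 0 for st in states}
for sweep in range(30000):
    for i in range(3):
        h = b[i] + sum(W[i][j] * s[j] for j in range(3) if j != i)
        s[i] = 1 if random.random() < 1 / (1 + math.exp(-h)) else 0
    if sweep >= 5000:                     # discard burn-in
        counts[tuple(s)] += 1

total = sum(counts.values())
max_err = max(abs(counts[st] / total - exact[st]) for st in states)
```

The empirical state frequencies converge to the Boltzmann distribution; the paper's claim is that the deterministic chaotic dynamics reproduce this sampling behavior without any random number generator.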

  12. Electrical machines & drives

    CERN Document Server

    Hammond, P

    1985-01-01

    Containing approximately 200 problems (100 worked), the text covers a wide range of topics concerning electrical machines, placing particular emphasis upon electrical-machine drive applications. The theory is concisely reviewed and focuses on features common to all machine types. The problems are arranged in order of increasing levels of complexity and discussions of the solutions are included where appropriate to illustrate the engineering implications. This second edition includes an important new chapter on mathematical and computer simulation of machine systems and revised discussions o

  13. Nanocomposites for Machining Tools

    Directory of Open Access Journals (Sweden)

    Daria Sidorenko

    2017-10-01

    Full Text Available Machining tools are used in many areas of production. To a considerable extent, the performance characteristics of the tools determine the quality and cost of obtained products. The main materials used for producing machining tools are steel, cemented carbides, ceramics and superhard materials. A promising way to improve the performance characteristics of these materials is to design new nanocomposites based on them. The application of micromechanical modeling during the elaboration of composite materials for machining tools can reduce the financial and time costs for development of new tools, with enhanced performance. This article reviews the main groups of nanocomposites for machining tools and their performance.

  14. Machine listening intelligence

    Science.gov (United States)

    Cella, C. E.

    2017-05-01

    This manifesto paper will introduce machine listening intelligence, an integrated research framework for acoustic and musical signals modelling, based on signal processing, deep learning and computational musicology.

  15. Machine learning with R

    CERN Document Server

    Lantz, Brett

    2013-01-01

    Written as a tutorial to explore and understand the power of R for machine learning, this practical guide covers all of the need-to-know topics in a systematic way. For each machine learning approach, each step in the process is detailed, from preparing the data for analysis to evaluating the results. These steps will build the knowledge you need to apply them to your own data science tasks. Intended for those who want to learn how to use R's machine learning capabilities and gain insight from their data. Perhaps you already know a bit about machine learning, but have never used R; or

  16. Rotating electrical machines

    CERN Document Server

    Le Doeuff, René

    2013-01-01

    In this book a general matrix-based approach to modeling electrical machines is promulgated. The model uses instantaneous quantities for key variables and enables the user to easily take into account associations between rotating machines and static converters (such as in variable speed drives).   General equations of electromechanical energy conversion are established early in the treatment of the topic and then applied to synchronous, induction and DC machines. The primary characteristics of these machines are established for steady state behavior as well as for variable speed scenarios. I

  17. Are there intelligent Turing machines?

    OpenAIRE

    Bátfai, Norbert

    2015-01-01

    This paper introduces a new computing model based on cooperation among Turing machines, called orchestrated machines. Like universal Turing machines, orchestrated machines are designed to simulate Turing machines, but they can also modify the original operation of the included Turing machines to create a new layer of collective behavior. Using this new model we can define some interesting notions related to the cooperation ability of Turing machines, such as the intelligence quo...

  18. Visualization and characterization of individual type III protein secretion machines in live bacteria.

    Science.gov (United States)

    Zhang, Yongdeng; Lara-Tejero, María; Bewersdorf, Jörg; Galán, Jorge E

    2017-06-06

    Type III protein secretion machines have evolved to deliver bacterially encoded effector proteins into eukaryotic cells. Although electron microscopy has provided a detailed view of these machines in isolation or fixed samples, little is known about their organization in live bacteria. Here we report the visualization and characterization of the Salmonella type III secretion machine in live bacteria by 2D and 3D single-molecule switching superresolution microscopy. This approach provided access to transient components of this machine, which previously could not be analyzed. We determined the subcellular distribution of individual machines, the stoichiometry of the different components of this machine in situ, and the spatial distribution of the substrates of this machine before secretion. Furthermore, by visualizing this machine in Salmonella mutants we obtained major insights into the machine's assembly. This study bridges a major resolution gap in the visualization of this nanomachine and may serve as a paradigm for the examination of other bacterially encoded molecular machines.

  19. Cognitive memory.

    Science.gov (United States)

    Widrow, Bernard; Aragon, Juan Carlos

    2013-05-01

    Regarding the workings of the human mind, memory and pattern recognition seem to be intertwined. You generally do not have one without the other. Taking inspiration from life experience, a new form of computer memory has been devised. Certain conjectures about human memory are keys to the central idea. The design of a practical and useful "cognitive" memory system is contemplated, a memory system that may also serve as a model for many aspects of human memory. The new memory does not function like a computer memory where specific data is stored in specific numbered registers and retrieval is done by reading the contents of the specified memory register, or done by matching key words as with a document search. Incoming sensory data would be stored at the next available empty memory location, and indeed could be stored redundantly at several empty locations. The stored sensory data would neither have key words nor would it be located in known or specified memory locations. Sensory inputs concerning a single object or subject are stored together as patterns in a single "file folder" or "memory folder". When the contents of the folder are retrieved, sights, sounds, tactile feel, smell, etc., are obtained all at the same time. Retrieval would be initiated by a query or a prompt signal from a current set of sensory inputs or patterns. A search through the memory would be made to locate stored data that correlates with or relates to the prompt input. The search would be done by a retrieval system whose first stage makes use of autoassociative artificial neural networks and whose second stage relies on exhaustive search. Applications of cognitive memory systems have been made to visual aircraft identification, aircraft navigation, and human facial recognition. Concerning human memory, reasons are given why it is unlikely that long-term memory is stored in the synapses of the brain's neural networks. 
Reasons are given suggesting that long-term memory is stored in DNA or RNA.
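The retrieval mechanism sketched above (autoassociative matching followed by exhaustive search) can be illustrated with a toy correlation-based lookup. The patterns, folder names, and threshold below are invented for illustration; this is not the authors' implementation.

```python
# Toy sketch of prompt-driven retrieval by correlation (illustrative;
# the stored patterns and the similarity threshold are assumptions,
# not the authors' actual cognitive-memory system).

def correlate(a, b):
    """Normalized dot product of two equal-length +/-1 pattern vectors."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

def retrieve(memory, prompt, threshold=0.5):
    """Exhaustively scan stored 'memory folders' and return the folder
    whose pattern correlates best with the prompt, if above threshold."""
    best_name, best_score = None, threshold
    for name, pattern in memory.items():
        score = correlate(pattern, prompt)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Stored sensory patterns (one per "memory folder"; invented data).
memory = {
    "aircraft": [1, 1, -1, -1, 1, -1],
    "face":     [-1, 1, 1, 1, -1, -1],
}

# A noisy prompt resembling the "aircraft" pattern (one flipped bit).
prompt = [1, 1, -1, -1, 1, 1]
print(retrieve(memory, prompt))  # -> aircraft
```

A prompt that correlates with no stored pattern above the threshold returns nothing, mirroring a failed recall.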

  20. Smile (System/Machine-Independent Local Environment)

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, J.G.

    1988-04-01

This document defines the characteristics of Smile, a System/machine-independent local environment. This environment consists primarily of a number of primitives (types, macros, procedure calls, and variables) that a program may use; these primitives provide facilities, such as memory allocation, timing, tasking and synchronization, beyond those typically provided by a programming language. The intent is that a program will be portable from system to system and from machine to machine if it relies only on the portable aspects of its programming language and on the Smile primitives. For this to be so, Smile itself must be implemented on each system and machine, most likely using non-portable constructions; that is, while the environment provided by Smile is intended to be portable, the implementation of Smile is not necessarily so. In order to make the implementation of Smile as easy as possible and thereby expedite the porting of programs to a new system or a new machine, Smile has been defined to provide a minimal portable environment; that is, simple primitives are defined, out of which more complex facilities may be constructed using portable procedures. The implementation of Smile can be any of the following: the underlying software environment for the operating system of an otherwise "bare" machine, a "guest" system environment built upon a preexisting operating system, an environment within a "user" process run by an operating system, or a single environment for an entire machine, encompassing both system and "user" processes. In the first three of these cases the tasks provided by Smile are "lightweight processes" multiplexed within preexisting processes or the system, while in the last case they also include the system processes themselves.

  1. Improvement of automatic fish feeder machine design

    Science.gov (United States)

    Chui Wei, How; Salleh, S. M.; Ezree, Abdullah Mohd; Zaman, I.; Hatta, M. H.; Zain, B. A. Md; Mahzan, S.; Rahman, M. N. A.; Mahmud, W. A. W.

    2017-10-01

The National Plan of Action for the management of fishing targets an efficient, equitable and transparent management of fishing capacity in marine capture fisheries by 2018. However, several factors influence fishery production and the efficiency of the marine system, and the automatic fish feeder machine is one that should be taken into consideration. Two recent fish feeder machines were chosen as references for this study. Observation showed that both machines were built with heavy structures and with materials of low water and temperature resistance. The objective of this research is to develop an automatic feeder machine that increases the efficiency of fish feeding. An experiment was conducted to test the new machine design. The new machine has a maximum storage of 5 kg and operates with two DC motors. It is able to distribute 500 grams of pellets within 90 seconds over a distance of up to 4.7 meters. Higher motor speeds reduce the time needed and increase the distance. The minimum speed settings for the two motors are 110 and 120 respectively, with the same full-speed setting of 255.

  2. Bionic machines and systems

    Energy Technology Data Exchange (ETDEWEB)

    Halme, A.; Paanajaervi, J. (eds.)

    2004-07-01

Introduction: Biological systems form a versatile and complex entirety on our planet. One evolutionary branch of primates, called humans, has created an extraordinary skill, called technology, by the aid of which it nowadays dominates life on the planet. Humans use technology for producing and harvesting food, for healthcare and reproduction, for increasing their capability to commute and communicate, for defending their territory, and for developing more technology. As a result, humans have become heavily dependent on technology, so much so that they have formed a specialized class of humans, called engineers, who maintain the knowledge of technology, develop it further, and transfer it to later generations. Until now, technology has been relatively independent of biology, although some of its branches, e.g. biotechnology and biomedical engineering, have traditionally been in close contact with it. There exists, however, an increasing interest in expanding the interface between technology and biology, either by directly utilizing biological processes or materials in combination with 'dead' technology, or by mimicking in technological solutions the biological innovations created by evolution. The latter theme is the focus of this report, which has been written as the proceedings of the post-graduate seminar 'Bionic Machines and Systems' held at the HUT Automation Technology Laboratory in autumn 2003. The underlying idea of the seminar was to analyze biological species by considering them as 'robotic machines' with various functional subsystems, such as those for energy, motion and motion control, perception, navigation, mapping and localization. We were also interested in intelligent capabilities, such as learning and communication, and in social structures like swarming behavior and its mechanisms. The term 'bionic machine' comes from the book which was among the initial material when starting our mission to the fascinating world

  3. In Schizophrenia, Depression, Anxiety, and Physiosomatic Symptoms Are Strongly Related to Psychotic Symptoms and Excitation, Impairments in Episodic Memory, and Increased Production of Neurotoxic Tryptophan Catabolites: a Multivariate and Machine Learning Study.

    Science.gov (United States)

    Kanchanatawan, Buranee; Thika, Supaksorn; Sirivichayakul, Sunee; Carvalho, André F; Geffard, Michel; Maes, Michael

    2018-04-01

The depression, anxiety and physiosomatic symptoms (DAPS) of schizophrenia are associated with negative symptoms and changes in tryptophan catabolite (TRYCAT) patterning. The aim of this study is to delineate the associations between DAPS and psychosis, hostility, excitation, and mannerism (PHEM) symptoms, cognitive tests as measured using the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) battery, and IgA/IgM responses to TRYCATs. We included 40 healthy controls and 80 participants with schizophrenia. Depression and anxiety symptoms were measured with the Hamilton Depression (HAM-D) and Anxiety (HAM-A) Rating Scales, respectively. Physiosomatic symptoms were assessed with the Fibromyalgia and Chronic Fatigue Syndrome Rating Scale (FF). Negative symptoms as well as CERAD tests, including the Verbal Fluency Test (VFT), Mini-Mental State Examination (MMSE), Word List Memory (WLM), and WL Delayed Recall, were measured, while ratios of IgA responses to noxious/protective TRYCATs (IgA NOX_PRO) were computed. Schizophrenia symptoms consisted of two dimensions, the first comprising PHEM and negative symptoms, and the second comprising DAPS. A large part of the variance in DAPS was explained by psychotic symptoms and WLM. Of the variance in HAM-D, 58.9% was explained by the regression on excitement, the IgA NOX_PRO ratio, WLM, and VFT; 29.9% of the variance in HAM-A by psychotic symptoms and IgA NOX/PRO; and 45.5% of the variance in FF score by psychotic symptoms, IgA NOX/PRO, and WLM. Neural network modeling shows that PHEM, IgA NOX_PRO, WLM, and MMSE are the dominant variables predicting DAPS. DAPS appear to be driven by PHEM and negative symptoms coupled with impairments in episodic memory, especially false memory creation, while all symptom dimensions and cognitive impairments may be driven by an increased production of noxious TRYCATs, including picolinic, quinolinic, and xanthurenic acid.

  4. A Context-Dependent Role for IL-21 in Modulating the Differentiation, Distribution, and Abundance of Effector and Memory CD8 T Cell Subsets.

    Science.gov (United States)

    Tian, Yuan; Cox, Maureen A; Kahan, Shannon M; Ingram, Jennifer T; Bakshi, Rakesh K; Zajac, Allan J

    2016-03-01

    The activation of naive CD8 T cells typically results in the formation of effector cells (TE) as well as phenotypically distinct memory cells that are retained over time. Memory CD8 T cells can be further subdivided into central memory, effector memory (TEM), and tissue-resident memory (TRM) subsets, which cooperate to confer immunological protection. Using mixed bone marrow chimeras and adoptive transfer studies in which CD8 T cells either do or do not express IL-21R, we discovered that under homeostatic or lymphopenic conditions IL-21 acts directly on CD8 T cells to favor the accumulation of TE/TEM populations. The inability to perceive IL-21 signals under competitive conditions also resulted in lower levels of TRM phenotype cells and reduced expression of granzyme B in the small intestine. IL-21 differentially promoted the expression of the chemokine receptor CX3CR1 and the integrin α4β7 on CD8 T cells primed in vitro and on circulating CD8 T cells in the mixed bone marrow chimeras. The requirement for IL-21 to establish CD8 TE/TEM and TRM subsets was overcome by acute lymphocytic choriomeningitis virus infection; nevertheless, memory virus-specific CD8 T cells remained dependent on IL-21 for optimal accumulation in lymphopenic environments. Overall, this study reveals a context-dependent role for IL-21 in sustaining effector phenotype CD8 T cells and influencing their migratory properties, accumulation, and functions. Copyright © 2016 by The American Association of Immunologists, Inc.

  5. Intelligent machines in the twenty-first century: foundations of inference and inquiry.

    Science.gov (United States)

    Knuth, Kevin H

    2003-12-15

    The last century saw the application of Boolean algebra to the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in our understanding of the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we recently identified the algebra of questions as the free distributive algebra, which will now allow us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper, we examine the foundations of inference and inquiry. We begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine-learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. Last, we will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, which experiment to perform, or which measurement to take given what they have

  6. Memory Modulation

    NARCIS (Netherlands)

    Roozendaal, Benno; McGaugh, James L.

    2011-01-01

    Our memories are not all created equally strong: Some experiences are well remembered while others are remembered poorly, if at all. Research on memory modulation investigates the neurobiological processes and systems that contribute to such differences in the strength of our memories. Extensive

  7. Smoothing type buffer memory device

    International Nuclear Information System (INIS)

    Podorozhnyj, D.M.; Yashin, I.V.

    1990-01-01

The layout of the micropower 4-bit smoothing type buffer memory device, which allows recording, without counting, of a sequence of randomly distributed input pulses in multi-channel devices with serial polling, is given. The power spent by a memory cell to record one binary digit is not greater than 0.15 mW; the device dead time is 10 μs.

  8. What happens when we compare the lifespan distributions of life script events and autobiographical memories of life story events? A cross-cultural study

    DEFF Research Database (Denmark)

    Zaragoza Scherman, Alejandra; Salgado, Sinué; Shao, Zhifang

Cultural Life Script Theory (Berntsen and Rubin, 2004) provides a cultural explanation of the reminiscence bump: adults older than 40 years remember a significantly greater amount of life events happening between 15 and 30 years of age (Rubin, Rahal, & Poon, 1998), compared to other lifetime periods ... and memories of life story events, we can determine the degree to which the cultural life script serves as a recall template for autobiographical memories, especially of positive life events from adolescence and early adulthood, also known as the reminiscence bump period ...

  9. Microsoft Azure machine learning

    CERN Document Server

    Mund, Sumit

    2015-01-01

    The book is intended for those who want to learn how to use Azure Machine Learning. Perhaps you already know a bit about Machine Learning, but have never used ML Studio in Azure; or perhaps you are an absolute newbie. In either case, this book will get you up-and-running quickly.

  10. The Hooey Machine.

    Science.gov (United States)

    Scarnati, James T.; Tice, Craig J.

    1992-01-01

    Describes how students can make and use Hooey Machines to learn how mechanical energy can be transferred from one object to another within a system. The Hooey Machine is made using a pencil, eight thumbtacks, one pushpin, tape, scissors, graph paper, and a plastic lid. (PR)

  11. Nanocomposites for Machining Tools

    DEFF Research Database (Denmark)

    Sidorenko, Daria; Loginov, Pavel; Mishnaevsky, Leon

    2017-01-01

    Machining tools are used in many areas of production. To a considerable extent, the performance characteristics of the tools determine the quality and cost of obtained products. The main materials used for producing machining tools are steel, cemented carbides, ceramics and superhard materials...

  12. A nucleonic weighing machine

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    The design and operation of a nucleonic weighing machine fabricated for continuous weighing of material over conveyor belt are described. The machine uses a 40 mCi cesium-137 line source and a 10 litre capacity ionization chamber. It is easy to maintain as there are no moving parts. It can also be easily removed and reinstalled. (M.G.B.)

  13. An asymptotical machine

    Science.gov (United States)

    Cristallini, Achille

    2016-07-01

    A new and intriguing machine may be obtained replacing the moving pulley of a gun tackle with a fixed point in the rope. Its most important feature is the asymptotic efficiency. Here we obtain a satisfactory description of this machine by means of vector calculus and elementary trigonometry. The mathematical model has been compared with experimental data and briefly discussed.

  14. Machine learning with R

    CERN Document Server

    Lantz, Brett

    2015-01-01

    Perhaps you already know a bit about machine learning but have never used R, or perhaps you know a little R but are new to machine learning. In either case, this book will get you up and running quickly. It would be helpful to have a bit of familiarity with basic programming concepts, but no prior experience is required.

  15. The deleuzian abstract machines

    DEFF Research Database (Denmark)

    Werner Petersen, Erik

    2005-01-01

    To most people the concept of abstract machines is connected to the name of Alan Turing and the development of the modern computer. The Turing machine is universal, axiomatic and symbolic (E.g. operating on symbols). Inspired by Foucault, Deleuze and Guattari extended the concept of abstract...

  16. Human Machine Learning Symbiosis

    Science.gov (United States)

    Walsh, Kenneth R.; Hoque, Md Tamjidul; Williams, Kim H.

    2017-01-01

    Human Machine Learning Symbiosis is a cooperative system where both the human learner and the machine learner learn from each other to create an effective and efficient learning environment adapted to the needs of the human learner. Such a system can be used in online learning modules so that the modules adapt to each learner's learning state both…

  17. Operational derivation of Boltzmann distribution with Maxwell's demon model.

    Science.gov (United States)

    Hosoya, Akio; Maruyama, Koji; Shikano, Yutaka

    2015-11-24

The resolution of the Maxwell's demon paradox linked thermodynamics with information theory through the information erasure principle. By considering a demon endowed with a Turing machine consisting of a memory tape and a processor, we attempt to explore this link towards the foundations of statistical mechanics and to derive results therein in an operational manner. Here, we present a derivation of the Boltzmann distribution in equilibrium as an example, without hypothesizing the principle of maximum entropy. Further, since the model can in principle be applied to non-equilibrium processes, we demonstrate the dissipation-fluctuation relation to show the possibility in this direction.
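Concretely, the Boltzmann distribution assigns a state of energy E_i the probability p_i = exp(-βE_i)/Z, with partition function Z = Σ_j exp(-βE_j). A quick numerical sanity check (the energy values and temperature are chosen arbitrarily for illustration, not taken from the paper's operational derivation):

```python
import math

def boltzmann(energies, beta):
    """Boltzmann distribution p_i = exp(-beta * E_i) / Z."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)  # partition function Z
    return [w / z for w in weights]

p = boltzmann([0.0, 1.0, 2.0], beta=1.0)
print(p)  # probabilities sum to 1; lower-energy states are more likely
```

At β = 0 (infinite temperature) the weights are all equal and the distribution becomes uniform, as expected.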

  18. Research on machine learning framework based on random forest algorithm

    Science.gov (United States)

    Ren, Qiong; Cheng, Hui; Han, Hai

    2017-03-01

With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these are widely used. However, existing machine learning frameworks are limited by the machine learning algorithms themselves, for example by the choice of parameters, the interference of noise, and a high threshold for use. This paper introduces the research background of machine learning frameworks and, in combination with the random forest algorithm commonly used for classification in machine learning, sets out the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.
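The abstract does not give ARF's details; as background, the standard random forest recipe it builds on (bootstrap resampling plus majority voting over randomized weak learners) can be sketched with decision stumps. All data and parameters below are invented for illustration:

```python
import random
from collections import Counter

def train_stump(data):
    """Fit a one-feature threshold classifier minimizing training errors."""
    best = None  # (errors, feature, threshold, sign)
    for f in range(len(data[0][0])):
        for x, _ in data:
            t = x[f]
            for sign in (1, -1):
                errs = sum(1 for xi, yi in data
                           if (1 if sign * (xi[f] - t) > 0 else 0) != yi)
                if best is None or errs < best[0]:
                    best = (errs, f, t, sign)
    _, f, t, sign = best
    return lambda x: 1 if sign * (x[f] - t) > 0 else 0

def train_forest(data, n_trees=25, seed=0):
    """Draw a bootstrap sample for each tree and fit one stump per sample."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        sample = [rng.choice(data) for _ in data]
        forest.append(train_stump(sample))
    return forest

def predict(forest, x):
    """Majority vote across the ensemble."""
    return Counter(tree(x) for tree in forest).most_common(1)[0][0]

# Tiny linearly separable toy set: label 1 iff the first feature > 0.5.
data = [((0.1, 0.9), 0), ((0.2, 0.1), 0), ((0.3, 0.7), 0),
        ((0.8, 0.2), 1), ((0.9, 0.9), 1), ((0.7, 0.4), 1)]
forest = train_forest(data)
print(predict(forest, (0.95, 0.5)))
```

A real random forest grows full decision trees with random feature subsets at each split; the stump version keeps the bootstrap-and-vote structure visible in a few lines.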

  19. Memory Dysfunction

    Science.gov (United States)

    Matthews, Brandy R.

    2015-01-01

    Purpose of Review: This article highlights the dissociable human memory systems of episodic, semantic, and procedural memory in the context of neurologic illnesses known to adversely affect specific neuroanatomic structures relevant to each memory system. Recent Findings: Advances in functional neuroimaging and refinement of neuropsychological and bedside assessment tools continue to support a model of multiple memory systems that are distinct yet complementary and to support the potential for one system to be engaged as a compensatory strategy when a counterpart system fails. Summary: Episodic memory, the ability to recall personal episodes, is the subtype of memory most often perceived as dysfunctional by patients and informants. Medial temporal lobe structures, especially the hippocampal formation and associated cortical and subcortical structures, are most often associated with episodic memory loss. Episodic memory dysfunction may present acutely, as in concussion; transiently, as in transient global amnesia (TGA); subacutely, as in thiamine deficiency; or chronically, as in Alzheimer disease. Semantic memory refers to acquired knowledge about the world. Anterior and inferior temporal lobe structures are most often associated with semantic memory loss. The semantic variant of primary progressive aphasia (svPPA) is the paradigmatic disorder resulting in predominant semantic memory dysfunction. Working memory, associated with frontal lobe function, is the active maintenance of information in the mind that can be potentially manipulated to complete goal-directed tasks. Procedural memory, the ability to learn skills that become automatic, involves the basal ganglia, cerebellum, and supplementary motor cortex. Parkinson disease and related disorders result in procedural memory deficits. Most memory concerns warrant bedside cognitive or neuropsychological evaluation and neuroimaging to assess for specific neuropathologies and guide treatment. PMID:26039844

  20. High speed operation of permanent magnet machines

    Science.gov (United States)

    El-Refaie, Ayman M.

    This work proposes methods to extend the high-speed operating capabilities of both the interior PM (IPM) and surface PM (SPM) machines. For interior PM machines, this research has developed and presented the first thorough analysis of how a new bi-state magnetic material can be usefully applied to the design of IPM machines. Key elements of this contribution include identifying how the unique properties of the bi-state magnetic material can be applied most effectively in the rotor design of an IPM machine by "unmagnetizing" the magnet cavity center posts rather than the outer bridges. The importance of elevated rotor speed in making the best use of the bi-state magnetic material while recognizing its limitations has been identified. For surface PM machines, this research has provided, for the first time, a clear explanation of how fractional-slot concentrated windings can be applied to SPM machines in order to achieve the necessary conditions for optimal flux weakening. A closed-form analytical procedure for analyzing SPM machines designed with concentrated windings has been developed. Guidelines for designing SPM machines using concentrated windings in order to achieve optimum flux weakening are provided. Analytical and numerical finite element analysis (FEA) results have provided promising evidence of the scalability of the concentrated winding technique with respect to the number of poles, machine aspect ratio, and output power rating. Useful comparisons between the predicted performance characteristics of SPM machines equipped with concentrated windings and both SPM and IPM machines designed with distributed windings are included. Analytical techniques have been used to evaluate the impact of the high pole number on various converter performance metrics. Both analytical techniques and FEA have been used for evaluating the eddy-current losses in the surface magnets due to the stator winding subharmonics. Techniques for reducing these losses have been

  1. Precision machining commercialization

    International Nuclear Information System (INIS)

    1978-01-01

    To accelerate precision machining development so as to realize more of the potential savings within the next few years of known Department of Defense (DOD) part procurement, the Air Force Materials Laboratory (AFML) is sponsoring the Precision Machining Commercialization Project (PMC). PMC is part of the Tri-Service Precision Machine Tool Program of the DOD Manufacturing Technology Five-Year Plan. The technical resources supporting PMC are provided under sponsorship of the Department of Energy (DOE). The goal of PMC is to minimize precision machining development time and cost risk for interested vendors. PMC will do this by making available the high precision machining technology as developed in two DOE contractor facilities, the Lawrence Livermore Laboratory of the University of California and the Union Carbide Corporation, Nuclear Division, Y-12 Plant, at Oak Ridge, Tennessee

  2. Introduction to machine learning.

    Science.gov (United States)

    Baştanlar, Yalin; Ozuysal, Mustafa

    2014-01-01

The machine learning field, which can be briefly defined as enabling computers to make successful predictions using past experiences, has exhibited impressive development recently with the help of the rapid increase in the storage capacity and processing power of computers. Together with many other disciplines, machine learning methods have been widely employed in bioinformatics. The difficulties and cost of biological analyses have led to the development of sophisticated machine learning approaches for this application area. In this chapter, we first review the fundamental concepts of machine learning such as feature assessment, unsupervised versus supervised learning and types of classification. Then, we point out the main issues of designing machine learning experiments and their performance evaluation. Finally, we introduce some supervised learning methods.
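To make the supervised-learning concept above concrete, here is a minimal nearest-centroid classifier; the two-feature toy data and labels are invented for illustration and are not from the chapter:

```python
# Minimal supervised learning sketch: nearest-centroid classification.
# Training = averaging labeled examples; prediction = closest centroid.

def fit_centroids(samples):
    """Average the feature vectors of each labeled class."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, features):
    """Assign the label of the closest centroid (squared Euclidean)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda lab: dist2(centroids[lab]))

train = [((1.0, 1.2), "low"), ((0.8, 1.0), "low"),
         ((3.0, 3.1), "high"), ((3.2, 2.9), "high")]
centroids = fit_centroids(train)
print(classify(centroids, (0.9, 1.1)))  # -> low
```

An unsupervised method would instead discover the two clusters without the "low"/"high" labels, which is the distinction the chapter draws.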

  3. LHC Report: machine development

    CERN Multimedia

    Rogelio Tomás García for the LHC team

    2015-01-01

    Machine development weeks are carefully planned in the LHC operation schedule to optimise and further study the performance of the machine. The first machine development session of Run 2 ended on Saturday, 25 July. Despite various hiccoughs, it allowed the operators to make great strides towards improving the long-term performance of the LHC.   The main goals of this first machine development (MD) week were to determine the minimum beam-spot size at the interaction points given existing optics and collimation constraints; to test new beam instrumentation; to evaluate the effectiveness of performing part of the beam-squeezing process during the energy ramp; and to explore the limits on the number of protons per bunch arising from the electromagnetic interactions with the accelerator environment and the other beam. Unfortunately, a series of events reduced the machine availability for studies to about 50%. The most critical issue was the recurrent trip of a sextupolar corrector circuit –...

  4. Changing concepts of working memory

    Science.gov (United States)

    Ma, Wei Ji; Husain, Masud; Bays, Paul M

    2014-01-01

    Working memory is widely considered to be limited in capacity, holding a fixed, small number of items, such as Miller's ‘magical number’ seven or Cowan's four. It has recently been proposed that working memory might better be conceptualized as a limited resource that is distributed flexibly among all items to be maintained in memory. According to this view, the quality rather than the quantity of working memory representations determines performance. Here we consider behavioral and emerging neural evidence for this proposal. PMID:24569831

  5. Declarative memory.

    Science.gov (United States)

    Riedel, Wim J; Blokland, Arjan

    2015-01-01

    Declarative Memory consists of memory for events (episodic memory) and facts (semantic memory). Methods to test declarative memory are key in investigating effects of potential cognition-enhancing substances--medicinal drugs or nutrients. A number of cognitive performance tests assessing declarative episodic memory tapping verbal learning, logical memory, pattern recognition memory, and paired associates learning are described. These tests have been used as outcome variables in 34 studies in humans that have been described in the literature in the past 10 years. Also, the use of episodic tests in animal research is discussed also in relation to the drug effects in these tasks. The results show that nutritional supplementation of polyunsaturated fatty acids has been investigated most abundantly and, in a number of cases, but not all, show indications of positive effects on declarative memory, more so in elderly than in young subjects. Studies investigating effects of registered anti-Alzheimer drugs, cholinesterase inhibitors in mild cognitive impairment, show positive and negative effects on declarative memory. Studies mainly carried out in healthy volunteers investigating the effects of acute dopamine stimulation indicate enhanced memory consolidation as manifested specifically by better delayed recall, especially at time points long after learning and more so when drug is administered after learning and if word lists are longer. The animal studies reveal a different picture with respect to the effects of different drugs on memory performance. This suggests that at least for episodic memory tasks, the translational value is rather poor. For the human studies, detailed parameters of the compositions of word lists for declarative memory tests are discussed and it is concluded that tailored adaptations of tests to fit the hypothesis under study, rather than "off-the-shelf" use of existing tests, are recommended.

  6. Quantum memory

    Science.gov (United States)

    Le Gouët, Jean-Louis; Moiseev, Sergey

    2012-06-01

    Interaction of quantum radiation with multi-particle ensembles has sparked off intense research efforts during the past decade. Emblematic of this field is the quantum memory scheme, where a quantum state of light is mapped onto an ensemble of atoms and then recovered in its original shape. While opening new access to the basics of light-atom interaction, quantum memory also appears as a key element for information processing applications, such as linear optics quantum computation and long-distance quantum communication via quantum repeaters. Not surprisingly, it is far from trivial to practically recover a stored quantum state of light and, although impressive progress has already been accomplished, researchers are still struggling to reach this ambitious objective. This special issue provides an account of the state-of-the-art in a fast-moving research area that makes physicists, engineers and chemists work together at the forefront of their discipline, involving quantum fields and atoms in different media, magnetic resonance techniques and material science. Various strategies have been considered to store and retrieve quantum light. The explored designs belong to three main—while still overlapping—classes. In architectures derived from photon echo, information is mapped over the spectral components of inhomogeneously broadened absorption bands, such as those encountered in rare earth ion doped crystals and atomic gases in external gradient magnetic field. Protocols based on electromagnetic induced transparency also rely on resonant excitation and are ideally suited to the homogeneous absorption lines offered by laser cooled atomic clouds or ion Coulomb crystals. Finally off-resonance approaches are illustrated by Faraday and Raman processes. Coupling with an optical cavity may enhance the storage process, even for negligibly small atom number. Multiple scattering is also proposed as a way to enlarge the quantum interaction distance of light with matter. The

  7. A Linear Algebra Framework for Static High Performance Fortran Code Distribution

    Directory of Open Access Journals (Sweden)

    Corinne Ancourt

    1997-01-01

High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
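For the simplest case the paper's framework covers, a one-dimensional BLOCK distribution, the directive-to-layout mapping is an affine rule: with block size b = ceil(N/P), global index i lives on processor i // b at local offset i % b. A sketch of that mapping (illustrative only; the paper's linear-algebra machinery handles general CYCLIC and multidimensional alignments):

```python
import math

def block_mapping(n, p):
    """Owner and local offset of each global index under an HPF-style
    BLOCK distribution of n array elements over p processors."""
    b = math.ceil(n / p)  # block size per processor
    return [(i // b, i % b) for i in range(n)]

# 10 elements over 4 processors: blocks of 3, last processor underfull.
for i, (owner, local) in enumerate(block_mapping(10, 4)):
    print(f"global {i} -> proc {owner}, local {local}")
```

The compiler's owner-computes rule then assigns each assignment statement to the processor that owns the left-hand-side element, which is exactly what this index map answers.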

  8. Machine Learning and Radiology

    Science.gov (United States)

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  10. Introduction of Memory Elements in the Simulated Annealing Method to Solve Multiobjective Parallel Machine Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Felipe Baesler

    2008-12-01

    Full Text Available This paper introduces a variant of the simulated annealing metaheuristic oriented to solving multiobjective optimization problems. The technique is called MultiObjective Simulated Annealing with Random Trajectory Search (MOSARTS). It incorporates short- and long-term memory concepts into simulated annealing in order to balance the search effort among all the objectives involved in the problem. The algorithm was tested against three other techniques on a real-life parallel machine scheduling problem composed of 24 jobs and two identical machines, a case study drawn from the local sawmill industry. In the experiments, MOSARTS performed better than the other methods, finding better solutions in terms of dominance and frontier dispersion.
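
    As a rough illustration of the kind of search the record describes (not the MOSARTS algorithm itself), the sketch below runs single-objective simulated annealing on a two-identical-machine makespan problem and adds a short-term memory that blocks recently tried moves; the job durations and every parameter are invented:

```python
# Simulated annealing with a tabu-style short-term memory element,
# sketched for a 2-identical-machine makespan problem (illustrative
# only; MOSARTS itself is multiobjective and more elaborate).
import math
import random

def makespan(assign, times):
    """Completion time of the more loaded of the two machines."""
    loads = [0, 0]
    for job, machine in enumerate(assign):
        loads[machine] += times[job]
    return max(loads)

def anneal(times, steps=5000, t0=10.0, cooling=0.999,
           memory_len=5, seed=1):
    """Anneal a 0/1 machine assignment; a job moved recently is kept
    in a short-term memory and may not be moved again for a while."""
    rng = random.Random(seed)
    assign = [rng.randint(0, 1) for _ in times]
    best = list(assign)
    recent = []                       # short-term memory of moved jobs
    t = t0
    for _ in range(steps):
        job = rng.randrange(len(times))
        if job in recent:
            continue                  # move blocked by the memory
        cand = list(assign)
        cand[job] ^= 1                # flip job to the other machine
        delta = makespan(cand, times) - makespan(assign, times)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            assign = cand
            recent.append(job)
            if len(recent) > memory_len:
                recent.pop(0)
            if makespan(assign, times) < makespan(best, times):
                best = list(assign)
        t = max(t * cooling, 1e-6)
    return best

times = list(range(1, 25))            # 24 invented job durations
best = anneal(times)
```

    The memory element keeps the walk from undoing its own recent moves, which is the balancing role the abstract attributes to memory in MOSARTS.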

  11. DNA-based machines.

    Science.gov (United States)

    Wang, Fuan; Willner, Bilha; Willner, Itamar

    2014-01-01

    The base sequence in nucleic acids encodes substantial structural and functional information into the biopolymer. This encoded information provides the basis for the tailoring and assembly of DNA machines. A DNA machine is defined as a molecular device that exhibits the following fundamental features. (1) It performs a fuel-driven mechanical process that mimics macroscopic machines. (2) The mechanical process requires an energy input, "fuel." (3) The mechanical operation is accompanied by an energy consumption process that leads to "waste products." (4) The cyclic operation of the DNA devices involves the use of "fuel" and "anti-fuel" ingredients. A variety of DNA-based machines are described, including the construction of "tweezers," "walkers," "robots," "cranes," "transporters," "springs," "gears," and interlocked cyclic DNA structures acting as reconfigurable catenanes, rotaxanes, and rotors. Different "fuels," such as nucleic acid strands, pH (H⁺/OH⁻), metal ions, and light, are used to trigger the mechanical functions of the DNA devices. The operation of the devices in solution and on surfaces is described, and a variety of optical, electrical, and photoelectrochemical methods to follow the operations of the DNA machines are presented. We further address the possible applications of DNA machines and the future perspectives of molecular DNA devices. These include the application of DNA machines as functional structures for the construction of logic gates and computing, for the programmed organization of metallic nanoparticle structures and the control of plasmonic properties, and for controlling chemical transformations by DNA machines. We further discuss the future applications of DNA machines for intracellular sensing, controlling intracellular metabolic pathways, and the use of the functional nanostructures for drug delivery and medical applications.

  12. Memory and History: Some considerations on antinomies and paradoxes

    Directory of Open Access Journals (Sweden)

    J Šubrt

    2015-12-01

    Full Text Available Collective memory does not retain the memories of the past as historical events really happened, but as they are remembered in the present. Memory includes only elements of the past, not the past as a whole. Theoretical thinking about memory has been shaped by opinions often arising from very different starting points. This article outlines ten antinomies characterised by the following terms: individual and collective memory, spirit and matter, saving and deleting, irrevocable and revocable history, spontaneous and purposeful memory, myth and science, rationality and irrationality. The text explains that memory works in a selective way and the contents which are stored in it have no permanent form, but change over time according to the needs of the specific present. Human memory does not work as a rational machine, but rather is prone to distortions and errors. An important role in shaping collective memory is played by ideological influence and deep-rooted historical myths.

  13. Fundamentals of machine design

    CERN Document Server

    Karaszewski, Waldemar

    2011-01-01

    A forum of researchers, educators and engineers involved in various aspects of Machine Design provided the inspiration for this collection of peer-reviewed papers. The resultant dissemination of the latest research results, and the exchange of views concerning the future research directions to be taken in this field will make the work of immense value to all those having an interest in the topics covered. The book reflects the cooperative efforts made in seeking out the best strategies for effecting improvements in the quality and the reliability of machines and machine parts and for extending

  14. Machine Learning for Hackers

    CERN Document Server

    Conway, Drew

    2012-01-01

    If you're an experienced programmer interested in crunching data, this book will get you started with machine learning, a toolkit of algorithms that enables computers to train themselves to automate useful tasks. Authors Drew Conway and John Myles White help you understand machine learning and statistics tools through a series of hands-on case studies, instead of a traditional math-heavy presentation. Each chapter focuses on a specific problem in machine learning, such as classification, prediction, optimization, and recommendation. Using the R programming language, you'll learn how to analyz

  15. Creativity in Machine Learning

    OpenAIRE

    Thoma, Martin

    2016-01-01

    Recent machine learning techniques can be modified to produce creative results. Those results did not exist before; they are not a trivial combination of the data that was fed into the machine learning system. The obtained results come in multiple forms: as images, as text and as audio. This paper gives a high-level overview of how they are created and gives some examples. It is meant to be a summary of the current work and to give people who are new to machine learning some starting points.

  16. Machine Tool Software

    Science.gov (United States)

    1988-01-01

    A NASA-developed software package has played a part in the technical education of students who major in Mechanical Engineering Technology at William Rainey Harper College. Professor Hack has been using Automatically Programmed Tool (APT) software since 1969 in his CAD/CAM (Computer-Aided Design and Manufacturing) curriculum. Professor Hack teaches the use of APT programming languages for the control of metal cutting machines. Machine tool instructions are geometry definitions written in the APT language to constitute a "part program." The part program is processed by the machine tool. CAD/CAM students go from writing a program to cutting steel in the course of a semester.

  17. Power Electronics and Electric Machines | Transportation Research | NREL

    Science.gov (United States)

    NREL's power electronics and electric machines research is helping boost the performance of power electronics components and systems, while driving down size, weight, and cost, to overcome technical barriers to EDV commercialization. EDVs rely heavily on power electronics to distribute the proper

  18. Tensor Network Quantum Virtual Machine (TNQVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. The Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that distributes work across a network of GPUs and simulates quantum circuits in a manner that exploits recent results from tensor network theory.

  19. Memory Reconsolidation and Computational Learning

    Science.gov (United States)

    2010-03-01

    Siegelmann-Danieli and H.T. Siegelmann, "Robust Artificial Life Via Artificial Programmed Death," Artificial Intelligence 172(6-7), April 2008: 884-898. Memory models are central to Artificial Intelligence and Machine Learning. The advances cited are a significant step toward creating Artificial Intelligence via neural networks at the human level.

  20. Quality assurance of a helical tomotherapy machine

    International Nuclear Information System (INIS)

    Fenwick, J D; Tome, W A; Jaradat, H A; Hui, S K; James, J A; Balog, J P; DeSouza, C N; Lucas, D B; Olivera, G H; Mackie, T R; Paliwal, B R

    2004-01-01

    Helical tomotherapy has been developed at the University of Wisconsin, and 'Hi-Art II' clinical machines are now commercially manufactured. At the core of each machine lies a ring-gantry-mounted short linear accelerator which generates x-rays that are collimated into a fan beam of intensity-modulated radiation by a binary multileaf, the modulation being variable with gantry angle. Patients are treated lying on a couch which is translated continuously through the bore of the machine as the gantry rotates. Highly conformal dose-distributions can be delivered using this technique, which is the therapy equivalent of spiral computed tomography. The approach requires synchrony of gantry rotation, couch translation, accelerator pulsing and the opening and closing of the leaves of the binary multileaf collimator used to modulate the radiation beam. In the course of clinically implementing helical tomotherapy, we have developed a quality assurance (QA) system for our machine. The system is analogous to that recommended for conventional clinical linear accelerator QA by AAPM Task Group 40 but contains some novel components, reflecting differences between the Hi-Art devices and conventional clinical accelerators. Here the design and dosimetric characteristics of Hi-Art machines are summarized and the QA system is set out along with experimental details of its implementation. Connections between this machine-based QA work, pre-treatment patient-specific delivery QA and fraction-by-fraction dose verification are discussed

  1. Testing the ghost with the machine

    International Nuclear Information System (INIS)

    De Zubicaray, G.

    2002-01-01

    Since its introduction during the 1990s, functional magnetic resonance imaging (fMRI) has been used to investigate brain activity occurring during a bewildering variety of sensory, motor and cognitive tasks. That is, a machine is being used to test 'the ghost in the machine' - the human mind. The use of imaging techniques to investigate these issues has even led to the emergence of a new scientific field called cognitive neuroscience. Currently, there are only a few groups in Australia actively publishing fMRI studies in the international literature, and the majority of these laboratories are clustered on the east coast. My own research with fMRI has focused on areas such as language and memory, with a special interest in how we solve competitive processes in our thinking

  2. Designing anticancer peptides by constructive machine learning.

    Science.gov (United States)

    Grisoni, Francesca; Neuhaus, Claudia; Gabernet, Gisela; Müller, Alex; Hiss, Jan; Schneider, Gisbert

    2018-04-21

    Constructive machine learning enables the automated generation of novel chemical structures without the need for explicit molecular design rules. This study presents the experimental application of such a generative model to design membranolytic anticancer peptides (ACPs) de novo. A recurrent neural network with long short-term memory cells was trained on alpha-helical cationic amphipathic peptide sequences and then fine-tuned with 26 known ACPs. This optimized model was used to generate unique and novel amino acid sequences. Twelve of the peptides were synthesized and tested for their activity on MCF7 human breast adenocarcinoma cells and selectivity against human erythrocytes. Ten of these peptides were active against cancer cells. Six of the active peptides killed MCF7 cancer cells without affecting human erythrocytes with at least threefold selectivity. These results advocate constructive machine learning for the automated design of peptides with desired biological activities. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Memory design

    DEFF Research Database (Denmark)

    Tanderup, Sisse

    Mind and Matter - Nordik 2009 Conference for Art Historians, Design Matters. BACKGROUND: My research concerns the use of memory categories in the designs of the companies Alessi and Georg Jensen. When Alessi's designers create their products, they are usually inspired by cultural forms, often specifically by the concept of memory in philosophy, sociology and psychology, while Danish design has traditionally focused on form and function, with frequent references to the forms of nature. Alessi's motivation for investigating the concept of memory is that it adds a cultural dimension to the design objects, enabling the objects to make an identity-forming impact. Whether or not the concept of memory plays a significant role in Danish design has not yet been elucidated fully. TERMINOLOGY: The concept of "memory design" refers to the idea that design carries

  4. Disputed Memory

    DEFF Research Database (Denmark)

    Written by an international group of scholars from a diversity of disciplines, the chapters approach memory disputes in methodologically innovative ways, studying representations and negotiations of disputed pasts in different media, including monuments, museum exhibitions, individual and political discourse and electronic social media. Analyzing memory disputes in various local, national and transnational contexts, the chapters demonstrate the political power and social impact of painful and disputed memories. The book brings new insights into current memory disputes in Central, Eastern and Southeastern Europe. It contributes to the understanding of processes of memory transmission and negotiation across borders and cultures in Europe, emphasizing the interconnectedness of memory with emotions, mediation and politics.

  5. Coordinate measuring machines

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with three exercises of 2 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercises concern three aspects of coordinate measuring: 1) Measuring and verification of tolerances on coordinate measuring machines, 2) Traceability and uncertainty during coordinate measurements, 3) Digitalisation and Reverse Engineering. This document contains a short description of each step in the exercise and schemes with room for taking notes of the results.

  6. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – is discussed, followed by detailed treatment of various image-processing methods, including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp

  7. Enter the machine

    Science.gov (United States)

    Palittapongarnpim, Pantita; Sanders, Barry C.

    2018-05-01

    Quantum tomography infers quantum states from measurement data, but it becomes infeasible for large systems. Machine learning enables tomography of highly entangled many-body states and suggests a new powerful approach to this problem.

  8. Introduction to AC machine design

    CERN Document Server

    Lipo, Thomas A

    2018-01-01

    AC electrical machine design is a key skill set for developing competitive electric motors and generators for applications in industry, aerospace, and defense. This book presents a thorough treatment of AC machine design, starting from basic electromagnetic principles and continuing through the various design aspects of an induction machine. Introduction to AC Machine Design includes one chapter each on the design of permanent magnet machines, synchronous machines, and thermal design. It also offers a basic treatment of the use of finite elements to compute the magnetic field within a machine without interfering with the initial comprehension of the core subject matter. Based on the author's notes and years of classroom instruction, Introduction to AC Machine Design: * Brings to light more advanced principles of machine design--not just the basic principles of AC and DC machine behavior * Introduces electrical machine design to neophytes while also being a resource for experienced designers * ...

  9. Main Memory

    OpenAIRE

    Boncz, Peter; Liu, Lei; Özsu, M.

    2008-01-01

    Primary storage, presently known as main memory, is the largest memory directly accessible to the CPU in the prevalent Von Neumann model and stores both data and instructions (program code). The CPU continuously reads instructions stored there and executes them. It is also called Random Access Memory (RAM), to indicate that load/store instructions can access data at any location at the same cost. It is usually implemented using DRAM chips, which are connected to the CPU and other per

  10. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Bluegene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
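
    The flavour of such a linear memory model can be sketched as follows. The terms mirror the decomposition the record describes (a fixed per-core base, per-neuron and per-synapse costs for locally stored objects, plus a term that grows with total network size and causes the saturation the authors observe), but every coefficient below is invented for illustration, not a measured value:

```python
# Toy linear model of a neuronal simulator's per-core memory use.
# All coefficients are hypothetical placeholders, not measured values.

def memory_per_core(n_neurons, n_cores, syn_per_neuron,
                    base=2.0e8,       # fixed simulator overhead per core (bytes)
                    m_global=2.0,     # cost per *total* neuron, paid on every core
                    m_neuron=1.5e3,   # cost per locally represented neuron
                    m_synapse=48.0):  # cost per locally stored synapse
    """Per-core memory as a linear function of network size and cores."""
    local_neurons = n_neurons / n_cores
    local_synapses = local_neurons * syn_per_neuron
    return (base + m_global * n_neurons
            + m_neuron * local_neurons + m_synapse * local_synapses)

# Doubling the cores halves the local terms but not the global one, so
# scaling stalls once m_global * n_neurons dominates -- the bottleneck
# the record identifies at meso- and macroscale.
```

    Fitting such a model to measurements lets one predict which component saturates first, which is exactly the use the authors describe.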

  11. Metalworking and machining fluids

    Science.gov (United States)

    Erdemir, Ali; Sykora, Frank; Dorbeck, Mark

    2010-10-12

    Improved boron-based metalworking and machining fluids. Boric acid and boron-based additives, when mixed with certain carrier fluids, such as water, cellulose and/or cellulose derivatives, polyhydric alcohol, polyalkylene glycol, polyvinyl alcohol, starch, and dextrin, in solid and/or solvated forms, result in improved metalworking and machining of metallic work pieces. Fluids manufactured with boric acid or boron-based additives effectively reduce friction and prevent galling and severe wear problems on cutting and forming tools.

  12. Superconducting machines. Chapter 4

    International Nuclear Information System (INIS)

    Appleton, A.D.

    1977-01-01

    A brief account is given of the principles of superconductivity and superconductors. The properties of Nb-Ti superconductors and the method of flux stabilization are described. The basic features of superconducting d.c. machines are illustrated by the use of these machines for ship propulsion, steel-mill drives, industrial drives, aluminium production, and other d.c. power supplies. Superconducting a.c. generators and their design parameters are discussed. (U.K.)

  13. Quantum Machine Learning

    OpenAIRE

    Romero García, Cristian

    2017-01-01

    [EN] In a world in which accessible information grows exponentially, the selection of the appropriate information turns out to be an extremely relevant problem. In this context, the idea of Machine Learning (ML), a subfield of Artificial Intelligence, emerged to face problems in data mining, pattern recognition, automatic prediction, among others. Quantum Machine Learning is an interdisciplinary research area combining quantum mechanics with methods of ML, in which quantum properties allow fo...

  14. Some relations between quantum Turing machines and Turing machines

    OpenAIRE

    Sicard, Andrés; Vélez, Mario

    1999-01-01

    For quantum Turing machines we present three elements: their components, their time evolution operator and their local transition function. The components are related to the components of deterministic Turing machines, the time evolution operator is related to the evolution of reversible Turing machines, and the local transition function is related to the transition function of probabilistic and reversible Turing machines.

  15. vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design

    OpenAIRE

    Rhu, Minsoo; Gimelshein, Natalia; Clemons, Jason; Zulfiqar, Arslan; Keckler, Stephen W.

    2016-01-01

    The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU...
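
    The effect being described can be mimicked with a toy accounting model: without virtualization, every activation produced in the forward pass stays on the GPU until backpropagation, whereas a vDNN-style manager offloads an activation to host memory once the next layer has consumed it and prefetches it back just before its backward step. The sketch below is our simplification of that idea, not NVIDIA's implementation:

```python
# Toy accounting of activation memory on a device during one forward
# pass of a linear chain of layers (arbitrary units, invented sizes).

def peak_device_memory(layer_sizes, offload=True):
    """Peak activation memory resident on the device."""
    if not offload:
        return sum(layer_sizes)       # everything stays resident
    peak = 0
    for i, size in enumerate(layer_sizes):
        # With offloading, only the current activation and its input
        # are resident; older ones have been moved to host memory.
        resident = size + (layer_sizes[i - 1] if i > 0 else 0)
        peak = max(peak, resident)
    return peak

sizes = [64, 32, 16, 8, 4]            # invented feature-map sizes
```

    The peak drops from the sum of all activations to roughly the two largest adjacent ones, which is the kind of headroom that lets a larger network fit in fixed DRAM.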

  16. On the Parallel Elliptic Single/Multigrid Solutions about Aligned and Nonaligned Bodies Using the Virtual Machine for Multiprocessors

    Directory of Open Access Journals (Sweden)

    A. Averbuch

    1994-01-01

    Full Text Available Parallel elliptic single/multigrid solutions around an aligned and nonaligned body are presented and implemented on two multi-user and single-user shared memory multiprocessors (Sequent Symmetry and MOS and on a distributed memory multiprocessor (a Transputer network. Our parallel implementation uses the Virtual Machine for Muli-Processors (VMMP, a software package that provides a coherent set of services for explicitly parallel application programs running on diverse multiple instruction multiple data (MIMD multiprocessors, both shared memory and message passing. VMMP is intended to simplify parallel program writing and to promote portable and efficient programming. Furthermore, it ensures high portability of application programs by implementing the same services on all target multiprocessors. The performance of our algorithm is investigated in detail. It is seen to fit well the above architectures when the number of processors is less than the maximal number of grid points along the axes. In general, the efficiency in the nonaligned case is higher than in the aligned case. Alignment overhead is observed to be up to 200% in the shared-memory case and up to 65% in the message-passing case. We have demonstrated that when using VMMP, the portability of the algorithms is straightforward and efficient.

  17. Distributed Merge Trees

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, Dmitriy; Weber, Gunther

    2013-01-08

    Improved simulations and sensors are producing datasets whose increasing complexity exhausts our ability to visualize and comprehend them directly. To cope with this problem, we can detect and extract significant features in the data and use them as the basis for subsequent analysis. Topological methods are valuable in this context because they provide robust and general feature definitions. As the growth of serial computational power has stalled, data analysis is becoming increasingly dependent on massively parallel machines. To satisfy the computational demand created by complex datasets, algorithms need to effectively utilize these computer architectures. The main strength of topological methods, their emphasis on global information, turns into an obstacle during parallelization. We present two approaches to alleviate this problem. We develop a distributed representation of the merge tree that avoids computing the global tree on a single processor and lets us parallelize subsequent queries. To account for the increasing number of cores per processor, we develop a new data structure that lets us take advantage of multiple shared-memory cores to parallelize the work on a single node. Finally, we present experiments that illustrate the strengths of our approach as well as help identify future challenges.
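
    To make the object concrete: on a 1D scalar field, the merge tree of sublevel sets can be computed serially with a union-find pass over vertices in order of increasing value; minima give birth to branches and saddles merge them. This is a minimal serial sketch of what the authors distribute, not their distributed representation:

```python
# Serial union-find sketch of sublevel-set merge events on a 1D grid.

def merge_events_1d(values):
    """Return (minima, merges): vertices where a branch of the merge
    tree is born and where two branches join, processing vertices in
    increasing order of value."""
    n = len(values)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    processed = [False] * n
    minima, merges = [], []
    for i in sorted(range(n), key=lambda i: values[i]):
        comps = {find(j) for j in (i - 1, i + 1)
                 if 0 <= j < n and processed[j]}
        if not comps:
            minima.append(i)           # local minimum: branch born
        elif len(comps) == 2:
            merges.append(i)           # saddle: two branches meet
        for r in comps:
            parent[r] = i
        processed[i] = True
    return minima, merges

minima, merges = merge_events_1d([3, 1, 4, 1, 5, 0, 2])
```

    The global dependence is visible here: a merge can join branches born arbitrarily far apart, which is why parallelizing the computation requires the distributed tree representation the record proposes.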

  18. Reactor refueling machine simulator

    International Nuclear Information System (INIS)

    Rohosky, T.L.; Swidwa, K.J.

    1987-01-01

    This patent describes in combination: a nuclear reactor; a refueling machine having a bridge, trolley and hoist each driven by a separate motor having feedback means for generating a feedback signal indicative of movement thereof. The motors are operable to position the refueling machine over the nuclear reactor for refueling the same. The refueling machine also has a removable control console including means for selectively generating separate motor signals for operating the bridge, trolley and hoist motors and for processing the feedback signals to generate an indication of the positions thereof, separate output leads connecting each of the motor signals to the respective refueling machine motor, and separate input leads for connecting each of the feedback means to the console; and a portable simulator unit comprising: a single simulator motor; a single simulator feedback signal generator connected to the simulator motor for generating a simulator feedback signal in response to operation of the simulator motor; means for selectively connecting the output leads of the console to the simulator unit in place of the refueling machine motors, and for connecting the console input leads to the simulator unit in place of the refueling machine motor feedback means; and means for driving the single simulator motor in response to any of the bridge, trolley or hoist motor signals generated by the console and means for applying the simulator feedback signal to the console input lead associated with the motor signal being generated by the control console

  19. Parallel algorithms for testing finite state machines:Generating UIO sequences

    OpenAIRE

    Hierons, RM; Turker, UC

    2016-01-01

This paper describes an efficient parallel algorithm that uses many-core GPUs to automatically derive Unique Input Output sequences (UIOs) from Finite State Machines. The proposed algorithm exploits the GPU's global memory through coalesced memory access and minimises transfers between CPU and GPU memory. Experimental results indicate that the proposed method yields considerably better results than a single-core UIO construction algorithm. Our algorithm is s...

  20. Dedup Est Machina : Memory Deduplication as an Advanced Exploitation Vector

    NARCIS (Netherlands)

    Bosman, Erik; Razavi, Kaveh; Bos, Herbert; Giuffrida, Cristiano

    2016-01-01

    Memory deduplication, a well-known technique to reduce the memory footprint across virtual machines, is now also a default-on feature inside the Windows 8.1 and Windows 10 operating systems. Deduplication maps multiple identical copies of a physical page onto a single shared copy with copy-on-write
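The core mechanism this record builds on can be sketched in a few lines: hash page contents, reference-count identical blobs, and give a writer its own copy. This is an illustrative toy model only (the `DedupStore` class is invented for the sketch), not the Windows implementation:

```python
# Illustrative sketch of content-based page deduplication with copy-on-write.
import hashlib

class DedupStore:
    def __init__(self):
        self.pages = {}      # page_id -> content hash
        self.contents = {}   # content hash -> (bytes, refcount)

    def write_page(self, page_id, data):
        self._release(page_id)
        h = hashlib.sha256(data).hexdigest()
        blob, refs = self.contents.get(h, (data, 0))
        self.contents[h] = (blob, refs + 1)   # share identical content
        self.pages[page_id] = h

    def _release(self, page_id):
        h = self.pages.pop(page_id, None)
        if h is not None:
            blob, refs = self.contents[h]
            if refs == 1:
                del self.contents[h]
            else:
                self.contents[h] = (blob, refs - 1)

    def read_page(self, page_id):
        return self.contents[self.pages[page_id]][0]

store = DedupStore()
store.write_page(1, b"A" * 4096)
store.write_page(2, b"A" * 4096)       # deduplicated against page 1
print(len(store.contents))             # 1: one shared physical copy
store.write_page(2, b"B" * 4096)       # a write triggers a private copy
print(len(store.contents))             # 2: contents now differ
```

The timing difference between writing to a shared page (copy needed) and an unshared one is what the paper exploits as a side channel.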

  1. [A new machinability test machine and the machinability of composite resins for core built-up].

    Science.gov (United States)

    Iwasaki, N

    2001-06-01

A new machinability test machine especially for dental materials was devised. The purpose of this study was to evaluate the effects of grinding conditions on the machinability of core built-up resins using this machine, and to confirm the relationship between machinability and other properties of composite resins. The experimental machinability test machine consisted of a dental air-turbine handpiece, a control weight unit, a driving unit for the stage fixing the test specimen, and so on. The machinability was evaluated as the change in volume after grinding with a diamond point. Five kinds of core built-up resins and human teeth were used in this study. The machinability of these composite resins increased with an increasing load during grinding, and decreased with repeated grinding. There was no obvious correlation between machinability and Vickers hardness; however, a negative correlation was observed between machinability and scratch width.

  2. The Knife Machine. Module 15.

    Science.gov (United States)

    South Carolina State Dept. of Education, Columbia. Office of Vocational Education.

    This module on the knife machine, one in a series dealing with industrial sewing machines, their attachments, and operation, covers one topic: performing special operations on the knife machine (a single needle or multi-needle machine which sews and cuts at the same time). These components are provided: an introduction, directions, an objective,…

  3. The Buttonhole Machine. Module 13.

    Science.gov (United States)

    South Carolina State Dept. of Education, Columbia. Office of Vocational Education.

This module on the buttonhole machine, one in a series dealing with industrial sewing machines, their attachments, and operation, covers two topics: performing special operations on the buttonhole machine (parts and purpose) and performing special operations on the buttonhole machine (gauged buttonholes). For each topic these components are…

  4. Superconducting Coil Winding Machine Control System

    Energy Technology Data Exchange (ETDEWEB)

    Nogiec, J. M. [Fermilab; Kotelnikov, S. [Fermilab; Makulski, A. [Fermilab; Walbridge, D. [Fermilab; Trombly-Freytag, K. [Fermilab

    2016-10-05

The Spirex coil winding machine is used at Fermilab to build coils for superconducting magnets. Recently this machine was equipped with a new control system, which allows operation from both a computer and a portable remote control unit. This control system is distributed between three layers, implemented on a PC, real-time target, and FPGA, providing respectively HMI, operational logic and direct controls. The system controls motion of all mechanical components and regulates the cable tension. Safety is ensured by a failsafe, redundant system.

  5. A translator and simulator for the Burroughs D machine

    Science.gov (United States)

    Roberts, J.

    1972-01-01

    The D Machine is described as a small user microprogrammable computer designed to be a versatile building block for such diverse functions as: disk file controllers, I/O controllers, and emulators. TRANSLANG is an ALGOL-like language, which allows D Machine users to write microprograms in an English-like format as opposed to creating binary bit pattern maps. The TRANSLANG translator parses TRANSLANG programs into D Machine microinstruction bit patterns which can be executed on the D Machine simulator. In addition to simulation and translation, the two programs also offer several debugging tools, such as: a full set of diagnostic error messages, register dumps, simulated memory dumps, traces on instructions and groups of instructions, and breakpoints.

  6. Support vector machine for diagnosis cancer disease: A comparative study

    Directory of Open Access Journals (Sweden)

    Nasser H. Sweilam

    2010-12-01

Full Text Available Support vector machines have become an increasingly popular tool for machine learning tasks involving classification, regression or novelty detection. Training a support vector machine requires the solution of a very large quadratic programming problem. Traditional optimization methods cannot be applied directly due to memory restrictions. Several approaches exist for circumventing these shortcomings and work well. In addition, particle swarm optimization and Quantum-behaved Particle Swarm optimization are introduced for training SVMs, as is a further approach combining the least squares support vector machine (LSSVM) with an active set strategy. The results obtained by these methods are tested on a breast cancer dataset and compared with the exact solution of the model problem.
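The LSSVM idea mentioned above replaces the quadratic program with a single linear system, which is one way around the memory restrictions of QP solvers. A minimal sketch under the standard LS-SVM classifier formulation (RBF kernel, toy data; all names here are illustrative, not from the paper):

```python
# Minimal least-squares SVM (LS-SVM) classifier: training reduces to
# solving one linear system instead of a quadratic program.
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))        # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))              # LS-SVM KKT system
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = K * np.outer(y, y) + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]

    def predict(Z):
        sqz = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        Kz = np.exp(-sqz / (2 * sigma ** 2))
        return np.sign(Kz @ (alpha * y) + b)

    return predict

# Toy 2-class problem: points left vs. right of the origin.
X = np.array([[-2.0, 0.0], [-1.0, 1.0], [1.0, -1.0], [2.0, 0.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
predict = lssvm_train(X, y)
print(predict(X))  # recovers the training labels
```

The trade-off is that LS-SVM loses the sparsity of the standard SVM: every training point gets a nonzero coefficient.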

  7. Exploring cluster Monte Carlo updates with Boltzmann machines

    Science.gov (United States)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
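The latent-variable sampling described above can be illustrated with block Gibbs updates in a tiny restricted Boltzmann machine: hidden units mediate interactions between visible spins, and one v → h → v sweep can flip whole groups of spins at once, loosely analogous to a cluster move. A generic sketch with random weights (not the paper's trained models):

```python
# Block Gibbs sampling in a tiny restricted Boltzmann machine (RBM).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, a, b):
    """One v -> h -> v block update; v and h are 0/1 vectors."""
    h = (rng.random(b.size) < sigmoid(v @ W + b)).astype(float)
    v = (rng.random(a.size) < sigmoid(W @ h + a)).astype(float)
    return v, h

n_vis, n_hid = 6, 2
W = rng.normal(0.0, 1.0, (n_vis, n_hid))   # couplings (random for the sketch)
a = np.zeros(n_vis)                        # visible biases
b = np.zeros(n_hid)                        # hidden biases

v = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(100):
    v, h = gibbs_step(v, W, a, b)
print(v, h)
```

In the paper's setting the weights are designed or learned so that the hidden layer identifies physical clusters; here they are random, which only demonstrates the sampling mechanics.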

  9. Collaging Memories

    Science.gov (United States)

    Wallach, Michele

    2011-01-01

    Even middle school students can have memories of their childhoods, of an earlier time. The art of Romare Bearden and the writings of Paul Auster can be used to introduce ideas about time and memory to students and inspire works of their own. Bearden is an exceptional role model for young artists, not only because of his astounding art, but also…

  10. Memory Magic.

    Science.gov (United States)

    Hartman, Thomas G.; Nowak, Norman

    This paper outlines several "tricks" that aid students in improving their memories. The distinctions between operational and figural thought processes are noted. Operational memory is described as something that allows adults to make generalizations about numbers and the rules by which they may be combined, thus leading to easier memorization.…

  11. Memory loss

    Science.gov (United States)

... barbiturates or hypnotics; ECT (electroconvulsive therapy) (most often short-term memory loss); epilepsy that is not well controlled; illness that ... appointment. Medical history questions may include: type of memory loss, such as short-term or long-term; time pattern, such as how ...

  12. Episodic Memories

    Science.gov (United States)

    Conway, Martin A.

    2009-01-01

    An account of episodic memories is developed that focuses on the types of knowledge they represent, their properties, and the functions they might serve. It is proposed that episodic memories consist of "episodic elements," summary records of experience often in the form of visual images, associated to a "conceptual frame" that provides a…

  13. Flavor Memory

    NARCIS (Netherlands)

    Mojet, Jos; Köster, Ep

    2016-01-01

    Odor, taste, texture, temperature, and pain all contribute to the perception and memory of food flavor. Flavor memory is also strongly linked to the situational aspects of previous encounters with the flavor, but does not depend on the precise recollection of its sensory features as in vision and

  14. Main Memory

    NARCIS (Netherlands)

    P.A. Boncz (Peter); L. Liu (Lei); M. Tamer Özsu

    2008-01-01

Primary storage, presently known as main memory, is the largest memory directly accessible to the CPU in the prevalent Von Neumann model and stores both data and instructions (program code). The CPU continuously reads instructions stored there and executes them. It is also called Random

  15. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  16. Built-In Test Engine For Memory Test

    OpenAIRE

    McEvoy, Paul; Farrell, Ronan

    2004-01-01

    In this paper we will present an on-chip method for testing high performance memory devices, that occupies minimal area and retains full flexibility. This is achieved through microcode test instructions and the associated on-chip state machine. In addition, the proposed methodology will enable at-speed testing of memory devices. The relevancy of this work is placed in context with an introduction to memory testing and the techniques and algorithms generally used today.
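Built-in self-test engines of this kind typically execute March-style algorithms over the address space. A software model of March C-, with an injected stuck-at fault to show detection, is sketched below (illustrative only; `StuckAtOne` is a hypothetical fault model, not the paper's on-chip microcode):

```python
# Software model of a March C- style memory test, the kind of algorithm a
# BIST state machine might run over a memory array.
def march_c_minus(mem):
    """Run March C- over a list-backed memory model; True if it passes."""
    n = len(mem)
    up, down = range(n), range(n - 1, -1, -1)
    for i in up:            # M0: ascending, write 0
        mem[i] = 0
    for i in up:            # M1: ascending, read 0 then write 1
        if mem[i] != 0:
            return False
        mem[i] = 1
    for i in up:            # M2: ascending, read 1 then write 0
        if mem[i] != 1:
            return False
        mem[i] = 0
    for i in down:          # M3: descending, read 0 then write 1
        if mem[i] != 0:
            return False
        mem[i] = 1
    for i in down:          # M4: descending, read 1 then write 0
        if mem[i] != 1:
            return False
        mem[i] = 0
    for i in up:            # M5: final read 0
        if mem[i] != 0:
            return False
    return True

class StuckAtOne(list):
    """Hypothetical fault model: cell 3 is stuck at 1."""
    def __setitem__(self, i, v):
        super().__setitem__(i, 1 if i == 3 else v)

print(march_c_minus([0] * 8))              # healthy memory passes
print(march_c_minus(StuckAtOne([0] * 8)))  # stuck-at fault is caught
```

In hardware the ascending/descending loops and read-expect-write steps map naturally onto a small state machine driven by microcode instructions, which is what keeps the on-chip area minimal.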

  17. Properties of a memory network in psychology

    Science.gov (United States)

    Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.

    2007-12-01

    We have previously described neurotic psychopathology and psychoanalytic working-through by an associative memory mechanism, based on a neural network model, where memory was modelled by a Boltzmann machine (BM). Since brain neural topology is selectively structured, we simulated known microscopic mechanisms that control synaptic properties, showing that the network self-organizes to a hierarchical, clustered structure. Here, we show some statistical mechanical properties of the complex networks which result from this self-organization. They indicate that a generalization of the BM may be necessary to model memory.

  18. Properties of a memory network in psychology

    International Nuclear Information System (INIS)

    Wedemann, Roseli S.; Donangelo, Raul; Carvalho, Luis A. V. de

    2007-01-01

    We have previously described neurotic psychopathology and psychoanalytic working-through by an associative memory mechanism, based on a neural network model, where memory was modelled by a Boltzmann machine (BM). Since brain neural topology is selectively structured, we simulated known microscopic mechanisms that control synaptic properties, showing that the network self-organizes to a hierarchical, clustered structure. Here, we show some statistical mechanical properties of the complex networks which result from this self-organization. They indicate that a generalization of the BM may be necessary to model memory

  19. Machining of Metal Matrix Composites

    CERN Document Server

    2012-01-01

    Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...

  20. Accessing memory

    Science.gov (United States)

    Yoon, Doe Hyun; Muralimanohar, Naveen; Chang, Jichuan; Ranganthan, Parthasarathy

    2017-09-26

    A disclosed example method involves performing simultaneous data accesses on at least first and second independently selectable logical sub-ranks to access first data via a wide internal data bus in a memory device. The memory device includes a translation buffer chip, memory chips in independently selectable logical sub-ranks, a narrow external data bus to connect the translation buffer chip to a memory controller, and the wide internal data bus between the translation buffer chip and the memory chips. A data access is performed on only the first independently selectable logical sub-rank to access second data via the wide internal data bus. The example method also involves locating a first portion of the first data, a second portion of the first data, and the second data on the narrow external data bus during separate data transfers.

  1. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

Virtual machines, as an engineering tool, have recently been introduced into automation projects at Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machines for automation projects. This paper designs different project scenarios using virtual machines. It analyzes the installability, performance and stability of virtual machines from the test results. Technical solutions concerning virtual machines are discussed, such as the conversion with physical...

  2. Non-conventional electrical machines

    CERN Document Server

    Rezzoug, Abderrezak

    2013-01-01

The developments of electrical machines are due to the convergence of material progress, improved calculation tools, and new feeding sources. Among the many recent machines, the authors have chosen, in this first book, to relate the progress in slow speed machines, high speed machines, and superconducting machines. The first part of the book is dedicated to materials and an overview of magnetism, mechanics, and heat transfer.

  3. Advanced SLARette delivery machine

    International Nuclear Information System (INIS)

    Bodner, R.R.

    1995-01-01

SLARette 1 equipment, comprising a SLARette Delivery Machine, SLAR Tools, SLAR power supplies and SLAR Inspection Systems, was designed, developed and manufactured to service fuel channels of CANDU 6 stations during the regular yearly station outages. The Mark 2 SLARette Delivery Machine uses a Push Tube system to provide the axial and rotary movements of the SLAR Tool. The Push Tubes are operated remotely but must be attached and removed manually. Since this operation is performed at the Reactor face, the workers incur a radiation dose. An Advanced SLARette Delivery Machine which incorporates a computer controlled telescoping Ram in the place of the Push Tubes has recently been designed and manufactured. Utilization of the Advanced SLARette Delivery Machine significantly reduces the radiation dose picked up by the workers because the need to have workers at the face of the Reactor during the SLARette operation is greatly reduced. This paper describes the design, development and manufacturing process utilized to produce the Advanced SLARette Delivery Machine and the experience gained during the Gentilly-2 NGS Spring outage. (author)

  4. The Bearingless Electrical Machine

    Science.gov (United States)

    Bichsel, J.

    1992-01-01

Electromagnetic bearings allow the suspension of solids. For rotary applications, the most important physical effect is the force of a magnetic circuit on a highly permeable armature, called the MAXWELL force. Contrary to the commonly used MAXWELL bearings, the bearingless electrical machine will take advantage of the reaction force of a conductor carrying a current in a magnetic field. This kind of force, called the Lorentz force, generates the torque in direct current, asynchronous and synchronous machines. The magnetic field, which already exists in electrical machines and helps to build up the torque, can also be used for the suspension of the rotor. Besides the normal winding of the stator, a special winding was added, which generates forces for levitation. So a radial bearing, which is integrated directly in the active part of the machine, and the motor use the laminated core simultaneously. The winding was constructed for the levitating forces in a special way so that commercially available standard ac inverters for drives can be used. Besides wholly magnetically suspended machines, there is a wide range of applications for normal drives with ball bearings. Resonances of the rotor, especially critical speeds, can be damped actively.

  5. Asymmetric quantum cloning machines

    International Nuclear Information System (INIS)

    Cerf, N.J.

    1998-01-01

A family of asymmetric cloning machines for quantum bits and N-dimensional quantum states is introduced. These machines produce two approximate copies of a single quantum state that emerge from two distinct channels. In particular, an asymmetric Pauli cloning machine is defined that makes two imperfect copies of a quantum bit, while the overall input-to-output operation for each copy is a Pauli channel. A no-cloning inequality is derived, characterizing the impossibility of copying imposed by quantum mechanics. If p and p′ are the probabilities of the depolarizing channels associated with the two outputs, the domain in (√p, √p′)-space located inside a particular ellipse representing close-to-perfect cloning is forbidden. This ellipse tends to a circle when copying an N-dimensional state with N → ∞, which has a simple semi-classical interpretation. The symmetric Pauli cloning machines are then used to provide an upper bound on the quantum capacity of the Pauli channel of probabilities p_x, p_y and p_z. The capacity is proven to be vanishing if (√p_x, √p_y, √p_z) lies outside an ellipsoid whose pole coincides with the depolarizing channel that underlies the universal cloning machine. Finally, the tradeoff between the quality of the two copies is shown to result from a complementarity akin to Heisenberg uncertainty principle. (author)

  6. La perversa máquina del olvido: cómics y memoria de la posguerra en la España de los 90 = The perverse machine of oblivion: comics and postwar memory in 90’s Spain

    Directory of Open Access Journals (Sweden)

    Pedro Pérez del Solar

    2015-06-01

Since Maus (Art Spiegelman), it has been clear that comics can convey reflections on memory in ways very different from those of literature or cinema; El Artefacto Perverso, far from any formula, shows that this exploration continues. This paper analyses the Spanish graphic novel El Artefacto Perverso (The Perverse Machine, 1996) by Felipe Hernández Cava and Federico del Barrio. This study shows how, by employing comics resources at their best, this work constructs a complex and compelling reflection on memory, especially that of the losers of the Spanish civil war (1936–39). The subject of memory is at the core of El Artefacto Perverso. Characters speak about it; plots converge on it. The main plot, the level of ‘reality’, is set in post-war Madrid. There, those who lost the civil war try to forget their past, struggling to survive in a silence that the guerrilla strives to break. Around this one, other stories emerge and dialogue with each other. One narrates the memories of a Spanish republican refugee in France. Another shows a dying republican agent who speaks about the erasure of all the losers of the war from the pages of history. Finally, there is the comic book written by the main character, where an all-Spanish hero, Pedro Guzmán, saves the world from a perverse machine that causes oblivion. The contact with the main plot charges this ingenuous story line with critical significance. Style also speaks about memory: fading memories of the refugee camp and feverish narrations each have a corresponding graphic representation. Pedro Guzmán’s adventures evoke the post-war period by employing conventions of Spanish comics of the forties. All of this has implications for contemporary Spain, where El Artefacto Perverso confronts the official discourse of the democratic regime, in which the war and post-war periods have been consciously neglected. Since Maus, we are sure that comics can convey reflections on memory in

  7. Digital Extension of Music Memory Music as a Collective Cultural Memory

    Directory of Open Access Journals (Sweden)

    Dimitrije Buzarovski

    2014-11-01

Full Text Available Artistic works represent a very important part of collective cultural memory. Every artistic work, by definition, can confirm its existence only through its presence in collective cultural memory. The migration from the author's individual memory to common collective cultural memory forms the cultural heritage. This applies equally to tangible and intangible cultural artifacts. Being part of collective cultural memory, music reflects the spatial (geographic) and temporal (historic) dimensions of this memory. Until the appearance of written signs (scores), music was preserved only through collective cultural memory. Scores have facilitated further distribution of music artifacts. The appearance of different means for audio, and later audio/video, recordings has greatly improved the distribution of music. The transition from analog to digital recording and carriers has been a revolutionary step which substantially extended the chances for the survival of music artifacts in collective memory.

  8. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....
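Boosted Regression Trees of the kind used in this study fit a sequence of small trees, each trained on the residuals of the model so far. A toy sketch with one-dimensional decision stumps makes the mechanism concrete (illustrative data and names only, not the study's model or covariates):

```python
# Toy gradient boosting with decision stumps: each round fits a stump to
# the residuals of the current prediction, then adds a damped correction.
import numpy as np

def fit_stump(x, residual):
    """Best single-threshold split of 1-D x minimising squared error."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, n_rounds=20, lr=0.5):
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)   # fit to current residuals
        pred += lr * stump(x)
        stumps.append(stump)
    base = y.mean()
    return lambda z: base + lr * sum(s(z) for s in stumps)

# Toy "abundance" response vs. a single environmental covariate.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 0.2, 0.2, 1.8, 2.0, 2.1])
model = boost(x, y)
print(np.round(model(x), 2))
```

Real BRT implementations use multi-feature trees, stochastic subsampling, and cross-validated round counts, but the residual-fitting loop is the same.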

  9. Running and machine studies in 1990

    International Nuclear Information System (INIS)

    1991-03-01

This annual report describes GANIL performance and machine studies. During 1990 the machine was operated for 36 weeks, divided into periods of 5, 6 or 7 weeks; consequently the number of beam set-ups was reduced. Of 5682 hours of scheduled beam, 3239 hours were delivered on target. Very heavy ions (Pb, U) are now accelerated owing to the OAE modification. Many experiments have been completed with the new medium energy beam facility. The machine studies were devoted to the development of the following items: production of 157Gd^19+ ions, acceleration of 238U^59+ at 24 MeV/u, SSC1 orbit precession, and charge state distribution and energy spread after stripping [fr

  10. Human-Machine Communication

    International Nuclear Information System (INIS)

    Farbrot, J.E.; Nihlwing, Ch.; Svengren, H.

    2005-01-01

New requirements for enhanced safety and design changes in process systems often lead to a step-wise installation of new information and control equipment in the control room of older nuclear power plants, where nowadays modern digital I and C solutions with screen-based human-machine interfaces (HMI) most often are introduced. Human factors (HF) expertise is then required to assist in specifying a unified, integrated HMI, where the entire integration of information is addressed to ensure an optimal and effective interplay between human (operators) and machine (process). Following a controlled design process is the best insurance for ending up with good solutions. This paper addresses the approach taken when introducing modern human-machine communication in the Oskarshamn 1 NPP, the results, and the lessons learned from this work with high operator involvement seen from an HF point of view. Examples of possibilities modern technology might offer for the operators are also addressed. (orig.)

  11. Machines and Metaphors

    Directory of Open Access Journals (Sweden)

    Ángel Martínez García-Posada

    2016-10-01

Full Text Available The edition La ley del reloj. Arquitectura, máquinas y cultura moderna (Cátedra, Madrid, 2016) registers the useful paradox of the analogy between architecture and technique. Its author, the architect Eduardo Prieto, also a philosopher, professor and writer, acknowledges the obvious distance from machines to buildings, so great that it can only be solved using strange comparisons, since architecture does not move nor are the machines habitable, however throughout the book, from the origin of the metaphor of the machine, with clarity in his essay and enlightening erudition, he points out with certainty some concomitances of high interest, drawing throughout history a beautiful cartography of the fruitful encounter between organics and mechanics.

  12. Machine Learning for Security

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Applied statistics, aka ‘Machine Learning’, offers a wealth of techniques for answering security questions. It’s a much hyped topic in the big data world, with many companies now providing machine learning as a service. This talk will demystify these techniques, explain the math, and demonstrate their application to security problems. The presentation will include how-to’s on classifying malware, looking into encrypted tunnels, and finding botnets in DNS data. About the speaker Josiah is a security researcher with HP TippingPoint DVLabs Research Group. He has over 15 years of professional software development experience. Josiah used to do AI, with work focused on graph theory, search, and deductive inference on large knowledge bases. As rules only get you so far, he moved from AI to using machine learning techniques identifying failure modes in email traffic. There followed digressions into clustered data storage and later integrated control systems. Current ...

  13. Chatter and machine tools

    CERN Document Server

    Stone, Brian

    2014-01-01

    Focussing on occurrences of unstable vibrations, or Chatter, in machine tools, this book gives important insights into how to eliminate chatter with associated improvements in product quality, surface finish and tool wear. Covering a wide range of machining processes, including turning, drilling, milling and grinding, the author uses his research expertise and practical knowledge of vibration problems to provide solutions supported by experimental evidence of their effectiveness. In addition, this book contains links to supplementary animation programs that help readers to visualise the ideas detailed in the text. Advancing knowledge in chatter avoidance and suggesting areas for new innovations, Chatter and Machine Tools serves as a handbook for those desiring to achieve significant reductions in noise, longer tool and grinding wheel life and improved product finish.

  14. Nonlinear machine learning and design of reconfigurable digital colloids.

    Science.gov (United States)

    Long, Andrew W; Phillips, Carolyn L; Jankowksi, Eric; Ferguson, Andrew L

    2016-09-14

    Digital colloids, a cluster of freely rotating "halo" particles tethered to the surface of a central particle, were recently proposed as ultra-high density memory elements for information storage. Rational design of these digital colloids for memory storage applications requires a quantitative understanding of the thermodynamic and kinetic stability of the configurational states within which information is stored. We apply nonlinear machine learning to Brownian dynamics simulations of these digital colloids to extract the low-dimensional intrinsic manifold governing digital colloid morphology, thermodynamics, and kinetics. By modulating the relative size ratio between halo particles and central particles, we investigate the size-dependent configurational stability and transition kinetics for the 2-state tetrahedral (N = 4) and 30-state octahedral (N = 6) digital colloids. We demonstrate the use of this framework to guide the rational design of a memory storage element to hold a block of text that trades off the competing design criteria of memory addressability and volatility.

  15. Clojure for machine learning

    CERN Document Server

    Wali, Akhil

    2014-01-01

A book that brings out the strengths of Clojure programming for machine learning. Each topic is described in substantial detail, and examples and libraries in Clojure are also demonstrated. This book is intended for Clojure developers who want to explore the area of machine learning. Basic understanding of the Clojure programming language is required, but thorough acquaintance with the standard Clojure library or any other libraries is not required. Familiarity with theoretical concepts and notation of mathematics and statistics would be an added advantage.

  16. Machine learning systems

    Energy Technology Data Exchange (ETDEWEB)

    Forsyth, R

    1984-05-01

    With the dramatic rise of expert systems has come a renewed interest in the fuel that drives them: knowledge. For it is specialist knowledge which gives expert systems their power. But extracting knowledge from human experts in symbolic form has proved arduous and labour-intensive. So the idea of machine learning is enjoying a renaissance. Machine learning is any automatic improvement in the performance of a computer system over time, as a result of experience. Thus a learning algorithm seeks to do one or more of the following: cover a wider range of problems, deliver more accurate solutions, obtain answers more cheaply, and simplify codified knowledge. 6 references.
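
    Forsyth's definition, automatic improvement in performance over time as a result of experience, can be illustrated with a minimal sketch (not from the record): an online perceptron that makes fewer mistakes late in training than early on. The data, boundary and learning rate are all assumptions for illustration.

```python
# Minimal sketch (not from the record): an online perceptron whose
# mistake rate drops with experience, illustrating machine learning as
# automatic improvement in performance over time.
import random

random.seed(0)

def sample():
    # Draw a labelled point with a margin around the boundary x0 + x1 = 1
    while True:
        x = (random.random(), random.random())
        s = x[0] + x[1]
        if abs(s - 1.0) > 0.2:
            return x, (1 if s > 1.0 else 0)

w, b, lr = [0.0, 0.0], 0.0, 0.1
errors_first = errors_last = 0

for t in range(2000):
    x, y = sample()
    pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
    err = y - pred                 # 0 when correct, +1/-1 on a mistake
    if t < 200:
        errors_first += abs(err)   # mistakes early in training
    if t >= 1800:
        errors_last += abs(err)    # mistakes after experience
    w[0] += lr * err * x[0]        # perceptron update: change weights
    w[1] += lr * err * x[1]        # only when a mistake is made
    b += lr * err

print("early errors:", errors_first, "late errors:", errors_last)
```

    The later error count is no worse than the early one: the system's performance has improved purely as a result of experience, which is exactly the sense of "learning" in the abstract.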

  17. Machine tool evaluation

    International Nuclear Information System (INIS)

    Lunsford, B.E.

    1976-01-01

    Continued improvement in numerical control (NC) units and in the mechanical components used in the construction of today's machine tools necessitates the use of more precise instrumentation to calibrate and determine the capabilities of these systems. It is now necessary to calibrate most tape-controlled lathes to a tool-path positioning accuracy of ±300 microinches over the full slide travel, and on some special turning and boring machines a capability of ±100 microinches must be achieved. The use of a laser interferometer to determine tool-path capabilities is described.

  18. Electrical machines & their applications

    CERN Document Server

    Hindmarsh, J

    1984-01-01

    A self-contained, comprehensive and unified treatment of electrical machines, including consideration of their control characteristics in both conventional and semiconductor-switched circuits. This new edition has been expanded and updated to include material which reflects current thinking and practice. All references have been updated to conform to the latest national (BS) and international (IEC) recommendations, and a new appendix has been added which deals more fully with the theory of permanent magnets, recognising the growing importance of permanent-magnet machines. The text is so arra

  19. Machine shop basics

    CERN Document Server

    Miller, Rex

    2004-01-01

    Use the right tool the right way. Here, fully updated to include new machines and electronic/digital controls, is the ultimate guide to basic machine shop equipment and how to use it. Whether you're a professional machinist, an apprentice, a trade student, or a handy homeowner, this fully illustrated volume helps you define tools and use them properly and safely. It's packed with review questions for students, and loaded with answers you need on the job. Mark Richard Miller is a Professor and Chairman of the Industrial Technology Department at Texas A&M University in Kingsville, T

  20. Electrical machines diagnosis

    CERN Document Server

    Trigeassou, Jean-Claude

    2013-01-01

    Monitoring and diagnosis of electrical machine faults is a scientific and economic issue which is motivated by objectives for reliability and serviceability in electrical drives. This book provides a survey of the techniques used to detect the faults occurring in electrical drives: electrical, thermal and mechanical faults of the electrical machine, faults of the static converter and faults of the energy storage unit. Diagnosis of faults occurring in electrical drives is an essential part of a global monitoring system used to improve reliability and serviceability. This diagnosis is perf