WorldWideScience

Sample records for single processor computers

  1. Introduction to programming multiple-processor computers

    International Nuclear Information System (INIS)

    Hicks, H.R.; Lynch, V.E.

    1985-04-01

    FORTRAN applications programs can be executed on multiprocessor computers in either a unitasking (traditional) or multitasking form. The latter allows a single job to use more than one processor simultaneously, with a consequent reduction in wall-clock time and, perhaps, the cost of the calculation. An introduction to programming in this environment is presented. The concepts of synchronization and data sharing using EVENTS and LOCKS are illustrated with examples. The strategy of strong synchronization and the use of synchronization templates are proposed. We emphasize that incorrect multitasking programs can produce irreproducible results, which makes debugging more difficult
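
    As a rough illustration (not the authors' FORTRAN, and with all names invented), the EVENT and LOCK primitives described above map naturally onto Python's threading module:

      # Sketch of EVENT/LOCK-style multitasking; omitting the Lock makes the
      # shared update irreproducible, exactly as the abstract warns.
      import threading

      data, total = [], 0
      ready = threading.Event()          # an EVENT: posted once data is prepared
      lock = threading.Lock()            # a LOCK: guards the shared accumulator

      def producer():
          data.extend(range(100))
          ready.set()                    # post the event

      def worker():
          global total
          ready.wait()                   # strong synchronization on the event
          for x in data:
              with lock:
                  total += x

      threads = [threading.Thread(target=producer)]
      threads += [threading.Thread(target=worker) for _ in range(2)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(total)                       # two workers each add 4950 -> 9900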

  2. Multiple Embedded Processors for Fault-Tolerant Computing

    Science.gov (United States)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
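
    A minimal sketch of the error-detection scheme described above, with software stand-ins (invented names) for the two PowerPC cores and the comparator:

      # Dual-modular redundancy: run the same step on two "cores" and
      # compare; this detects errors only. Correcting them, as in the
      # planned four-processor version, would need voting.
      def step(state, instr):
          return state + instr           # stand-in for one processor core

      def run_dmr(program, upset_at=None):
          a = b = 0
          for i, instr in enumerate(program):
              a, b = step(a, instr), step(b, instr)
              if i == upset_at:
                  b ^= 1                 # inject a single-event upset in core B
              if a != b:                 # the comparator
                  return f"error detected at step {i}"
          return f"ok, result = {a}"

      print(run_dmr([1, 2, 3]))               # ok, result = 6
      print(run_dmr([1, 2, 3], upset_at=1))   # error detected at step 1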

  3. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    These proceedings contain the articles presented at the named conference. These concern hardware and software for vector and parallel processors, numerical methods and algorithms for the computation on such processors, as well as applications of such methods to different fields of physics and related sciences. See hints under the relevant topics. (HSI)

  4. Computer Generated Inputs for NMIS Processor Verification

    International Nuclear Information System (INIS)

    J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly

    2001-01-01

    Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In-Self-Test)] at the digital inputs. Preselected sequences of input pulses to all channels with known correlation functions are compared to the output of the processor. These types of verifications have been utilized in NMIS-type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in a NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999
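
    A hypothetical sketch of the verification idea (invented names; the real BIST drives the processor's digital inputs in hardware): generate a preselected pulse sequence, compute its known autocorrelation offline, and compare that with what the processor under test reports:

      import numpy as np

      rng = np.random.default_rng(1984)
      x = rng.integers(0, 2, 1024).astype(float)    # preselected input pulses

      def reference_autocorr(sig, maxlag):          # known answer, computed offline
          return np.array([sig[: len(sig) - k] @ sig[k:] for k in range(maxlag)])

      def processor_under_test(sig, maxlag):        # stand-in for the NMIS processor;
          return reference_autocorr(sig, maxlag)    # a faulty board would differ here

      expected = reference_autocorr(x, 8)
      measured = processor_under_test(x, 8)
      assert np.allclose(measured, expected), "correlation processor malfunction"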

  5. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  6. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  7. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
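
    The flavor of such a kernel performance model can be sketched with a simple bound in which a kernel's time is the larger of its compute time and its memory-traffic time (the peak numbers below are placeholders, not measured Cell figures):

      def predicted_gflops(flops, bytes_moved, peak_gflops=25.6, peak_gbs=25.6):
          t_compute = flops / (peak_gflops * 1e9)    # seconds if compute bound
          t_memory = bytes_moved / (peak_gbs * 1e9)  # seconds if bandwidth bound
          return flops / max(t_compute, t_memory) / 1e9

      # SpMV moves roughly 12 bytes per 2 flops, so it is memory bound:
      print(predicted_gflops(flops=2e9, bytes_moved=12e9))   # ~4.3 GFLOP/s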

  8. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  9. An introduction to programming multiple-processor computers

    International Nuclear Information System (INIS)

    Hicks, H.R.; Lynch, V.E.

    1986-01-01

    Fortran applications programs can be executed on multiprocessor computers in either a unitasking (traditional) or multitasking form. The latter allows a single job to use more than one processor simultaneously, with a consequent reduction in elapsed time and, perhaps, the cost of the calculation. An introduction to programming in this environment is presented. The concepts of synchronization and data sharing using EVENTS and LOCKS are illustrated with examples. The strategy of strong synchronization and the use of synchronization templates are proposed. We emphasize that incorrect multitasking programs can produce irreproducible results, which makes debugging more difficult

  10. Recommending the heterogeneous cluster type multi-processor system computing

    International Nuclear Information System (INIS)

    Iijima, Nobukazu

    2010-01-01

    A real-time reactor simulator had been developed by reusing equipment of the Musashi reactor, and improving its performance by increasing the sampling rate became indispensable for its use as a research tool, leading to the introduction of arithmetic units using a multi-Digital Signal Processor (DSP) system (cluster). In order to realize heterogeneous cluster-type multi-processor system computing, a combination of two kinds of Control Processors (CPs), the Cluster Control Processor (CCP) and the System Control Processor (SCP), was proposed, with a Large System Control Processor (LSCP) for hierarchical clusters if needed. The faster computing performance of this system was confirmed by simulation results for simultaneous execution of plural jobs and for pipeline processing between clusters, which showed that the system leads to effective use of the existing system and enhanced cost performance. (T. Tanaka)

  11. Fast parallel computation of polynomials using few processors

    DEFF Research Database (Denmark)

    Valiant, Leslie; Skyum, Sven

    1981-01-01

    It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^O(1) processors....

  12. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    Science.gov (United States)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social-networking web applications, and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.
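
    The data-level parallelism such workloads contain can be illustrated in a few lines: the same operation applied across a whole array (here via NumPy's vectorized reduction, standing in for SIMD/GPU execution) versus an explicit scalar loop:

      import time
      import numpy as np

      a = np.random.rand(1_000_000)

      t0 = time.perf_counter()
      s = 0.0
      for x in a:                        # scalar loop, one element at a time
          s += x * x
      t1 = time.perf_counter()
      s_vec = float(a @ a)               # one data-parallel reduction
      t2 = time.perf_counter()
      print(f"loop {t1 - t0:.3f}s  vectorized {t2 - t1:.3f}s")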

  13. MPC Related Computational Capabilities of ARMv7A Processors

    DEFF Research Database (Denmark)

    Frison, Gianluca; Jørgensen, John Bagterp

    2015-01-01

    In recent years, the mass market of mobile devices has pushed the demand for increasingly fast but cheap processors. ARM, the world leader in this sector, has developed the Cortex-A series of processors with focus on computationally intensive applications. If properly programmed, these processors are powerful enough to solve the complex optimization problems arising in MPC in real-time, while keeping the traditional low-cost and low-power consumption. This makes these processors ideal candidates for use in embedded MPC. In this paper, we investigate the floating-point capabilities of Cortex A7, A9 and A15 and show how to exploit the unique features of each processor to obtain the best performance, in the context of a novel implementation method for the linear-algebra routines used in MPC solvers. This method adapts high-performance computing techniques to the needs of embedded MPC. In particular...

  14. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    This book presents the papers given at a conference which reviewed the new developments in parallel and vector processing. Topics considered at the conference included hardware (array processors, supercomputers), programming languages, software aids, numerical methods (e.g., Monte Carlo algorithms, iterative methods, finite elements, optimization), and applications (e.g., neutron transport theory, meteorology, image processing)

  15. Single particle irradiation effect of digital signal processor

    International Nuclear Information System (INIS)

    Fan Si'an; Chen Kenan

    2010-01-01

    The single-particle irradiation effect of high-energy neutrons on the digital signal processor TMS320P25 under dynamic working conditions has been studied. The influence of single particles on the device has been explored through the acquired waveforms and the working current of the TMS320P25. Analysis results, test data and test methods have also been presented. (authors)

  16. A single chip pulse processor for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Hilsenrath, F.; Bakke, J.C.; Voss, H.D.

    1985-01-01

    A high performance digital pulse processor, integrated into a single gate array microcircuit, has been developed for spaceflight applications. The new approach takes advantage of the latest CMOS high speed A/D flash converters and low-power gated logic arrays. The pulse processor measures pulse height, pulse area and the required timing information (e.g. multi-detector coincidence and pulse pile-up detection). The pulse processor features a high throughput rate (e.g. 0.5 MHz for 2 µs Gaussian pulses) and improved differential linearity (e.g. ±0.2 LSB for a ±1 LSB A/D). Because of the parallel digital architecture of the device, the interface is microprocessor bus compatible. A satellite flight application of this module is presented for use in the X-ray imager and high energy particle spectrometers of the PEM experiment on the Upper Atmospheric Research Satellite

  17. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  18. Fast Parallel Computation of Polynomials Using Few Processors

    DEFF Research Database (Denmark)

    Valiant, Leslie G.; Skyum, Sven; Berkowitz, S.

    1983-01-01

    It is shown that any multivariate polynomial of degree $d$ that can be computed sequentially in $C$ steps can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors....

  19. Initial explorations of ARM processors for scientific computing

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad

    2014-01-01

    Power efficiency is becoming an ever more important metric for both high performance and high throughput computing. Over the course of the next decade it is expected that flops/watt will be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular we report the results from our work with a current generation ARMv7 development board to explore ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software

  20. A design of a computer complex including vector processors

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1982-12-01

    We, members of the Computing Center of the Japan Atomic Energy Research Institute, have been engaged for the past six years in research on the adaptability of vector processing to large-scale nuclear codes. The research has been done in collaboration with researchers and engineers of JAERI and a computer manufacturer. In this research, forty large-scale nuclear codes were investigated from the viewpoint of vectorization. Among them, twenty-six codes were actually vectorized and executed. As a result of the investigation, it is now estimated that about seventy percent of nuclear codes, and seventy percent of the total CPU time at JAERI, are highly vectorizable. Based on the data obtained by the investigation, this report discusses (1) currently vectorizable CPU time, (2) the necessary number of vector processors, (3) the manpower necessary for vectorization of nuclear codes, (4) the computing speed, memory size, number of parallel I/O paths, and size and speed of the I/O buffer of a vector processor suitable for our applications, and (5) the necessary software and operational policy for the use of vector processors, and finally (6) presents a computer complex including vector processors. (author)

  1. Programs for Testing Processor-in-Memory Computing Systems

    Science.gov (United States)

    Katz, Daniel S.

    2006-01-01

    The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
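
    A sketch of that simple-to-complex progression, with Python threads standing in for POSIX pthreads (stage names invented): stage 1 only spawns and joins threads; stage 2 adds a shared counter that fails if mutual exclusion is broken:

      import threading

      def stage1_spawn_join(n=4):
          ts = [threading.Thread(target=lambda: None) for _ in range(n)]
          for t in ts: t.start()
          for t in ts: t.join()
          return True

      def stage2_shared_counter(n=4, iters=10_000):
          count, lock = 0, threading.Lock()
          def work():
              nonlocal count
              for _ in range(iters):
                  with lock:
                      count += 1
          ts = [threading.Thread(target=work) for _ in range(n)]
          for t in ts: t.start()
          for t in ts: t.join()
          return count == n * iters      # wrong if mutual exclusion is broken

      print(stage1_spawn_join() and stage2_shared_counter())   # True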

  2. Graphics processor efficiency for realization of rapid tabular computations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Us, S.A.; Shestakov, M.V.

    2016-01-01

    Capabilities of graphics processing units (GPU) and central processing units (CPU) have been investigated for realization of fast-calculation algorithms with the use of tabulated functions. The realization of tabulated functions is exemplified by the GPU/CPU architecture-based processors. Comparison is made between the operating efficiencies of GPU and CPU, employed for tabular calculations at different conditions of use. Recommendations are formulated for the use of graphical and central processors to speed up scientific and engineering computations through the use of tabulated functions
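
    The technique being benchmarked is easy to state in code: precompute an expensive function on a grid once, then answer many queries by table lookup with interpolation (a sketch, using an arbitrary example function):

      import numpy as np

      xs = np.linspace(0.0, 10.0, 1001)            # table grid
      table = np.sin(xs) * np.exp(-0.1 * xs)       # expensive function, tabulated once

      def f_tab(x):                                # fast path: lookup + interpolation
          return np.interp(x, xs, table)

      q = np.random.uniform(0.0, 10.0, 1_000_000)
      err = np.max(np.abs(f_tab(q) - np.sin(q) * np.exp(-0.1 * q)))
      print(err)                                   # interpolation error ~1e-5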

  3. Optimal neural computations require analog processors

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware-imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and suggests the following two alternatives: (i) cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; (ii) investigate solutions which would allow the use of the third dimension (e.g. using optical interconnections).

  4. Image matrix processor for fast multi-dimensional computations

    Science.gov (United States)

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  5. Architectures for single-chip image computing

    Science.gov (United States)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  6. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is to make fast analysis of the large amount of experimental and simulated data. At LHC-CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain fast results depends on high computational power. The main advantage of using GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for the GPU, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we'll present the current status and trends in HEP using GPUs.

  7. Application of the Computer Capacity to the Analysis of Processors Evolution

    OpenAIRE

    Ryabko, Boris; Rakitskiy, Anton

    2017-01-01

    The notion of computer capacity was proposed in 2012, and this quantity has been estimated for computers of different kinds. In this paper we show that, when designing new processors, the manufacturers change the parameters that affect the computer capacity. This allows us to predict the values of parameters of future processors. As the main example we use Intel processors, due to the accessibility of detailed description of all their technical characteristics.

  8. Architecture and VHDL behavioural validation of a parallel processor dedicated to computer vision

    International Nuclear Information System (INIS)

    Collette, Thierry

    1992-01-01

    Speeding up image processing is mainly achieved using parallel computers; SIMD processors (single instruction stream, multiple data stream) have been developed, and have proven highly efficient for low-level image processing operations. Nevertheless, their performance drops for most intermediate- or high-level operations, mainly when random data reorganisations in processor memories are involved. The aim of this thesis was to extend SIMD computer capabilities to allow it to perform more efficiently at the intermediate level of image processing. The study of some representative algorithms of this class points out the limits of this computer. Nevertheless, these limits can be removed by architectural modifications. This leads us to propose SYMPATIX, a new SIMD parallel computer. To validate its new concept, a behavioural model written in VHDL - Hardware Description Language - has been elaborated. With this model, the new computer's performance has been estimated by running image processing algorithm simulations. The VHDL modeling approach allows top-down electronic design of the system, giving an easy coupling between system architectural modifications and their electronic cost. The results obtained show SYMPATIX to be an efficient computer for low- and intermediate-level image processing. It can be connected to a high-level computer, opening up the development of new computer vision applications. This thesis also presents a top-down design method, based on VHDL, intended for electronic system architects. (author) [fr]

  9. Highway traffic simulation on multi-processor computers

    Energy Technology Data Exchange (ETDEWEB)

    Hanebutte, U.R.; Doss, E.; Tentner, A.M.

    1997-04-01

    A computer model has been developed to simulate highway traffic for various degrees of automation with a high level of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway traffic system and allows for the use of Intelligent Transportation System (ITS) technologies such as an Automated Intelligent Cruise Control (AICC). The structure of the computer model facilitates the use of parallel computers for the highway traffic simulation, since domain decomposition techniques can be applied in a straightforward fashion. In this model, the highway system (i.e. a network of road links) is divided into multiple regions; each region is controlled by a separate link manager residing on an individual processor. A graphical user interface augments the computer model by allowing for real-time interactive simulation control and interaction with each individual vehicle and road side infrastructure element on each link. Average speed and traffic volume data is collected at user-specified loop detector locations. Further, as a measure of safety, the so-called Time To Collision (TTC) parameter is recorded.
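
    The Time To Collision measure mentioned above has a one-line core: the gap to the lead vehicle divided by the rate at which that gap closes (a sketch; the simulator's actual bookkeeping is richer):

      def time_to_collision(gap_m, v_follower_ms, v_leader_ms):
          closing = v_follower_ms - v_leader_ms
          if closing <= 0.0:
              return float("inf")        # not closing in; no collision predicted
          return gap_m / closing

      print(time_to_collision(30.0, 30.0, 25.0))   # 6.0 seconds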

  10. The Potential of the Cell Processor for Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  11. Algorithms for computational fluid dynamics on parallel processors

    International Nuclear Information System (INIS)

    Van de Velde, E.F.

    1986-01-01

    A study of parallel algorithms for the numerical solution of partial differential equations arising in computational fluid dynamics is presented. The actual implementation on parallel processors of shared and nonshared memory design is discussed. The performance of these algorithms is analyzed in terms of machine efficiency, communication time, bottlenecks and software development costs. For elliptic equations, a parallel preconditioned conjugate gradient method is described, which has been used to solve pressure equations discretized with high order finite elements on irregular grids. A parallel full multigrid method and a parallel fast Poisson solver are also presented. Hyperbolic conservation laws were discretized with parallel versions of finite difference methods like the Lax-Wendroff scheme and with the Random Choice method. Techniques are developed for comparing the behavior of an algorithm on different architectures as a function of problem size and local computational effort. Effective use of these advanced architecture machines requires the use of machine dependent programming. It is shown that the portability problems can be minimized by introducing high level operations on vectors and matrices structured into program libraries
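
    For concreteness, the conjugate gradient iteration at the heart of such pressure solvers looks as follows (a serial NumPy sketch; the parallel versions distribute the matrix-vector product and the dot products):

      import numpy as np

      def cg(A, b, tol=1e-10, maxit=1000):
          x = np.zeros_like(b)
          r = b - A @ x                  # residual
          p = r.copy()                   # search direction
          rs = r @ r
          for _ in range(maxit):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])       # small SPD test system
      print(cg(A, np.array([1.0, 2.0])))           # ~[0.0909, 0.6364]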

  12. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR) via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.
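
    The backprojection kernel itself is short; a naive sketch (nearest-neighbor gather, no phase correction, random stand-in data) shows where the billions of per-pixel operations come from:

      import numpy as np

      npulses, nsamp, npix = 64, 256, 128
      pulses = np.random.rand(npulses, nsamp)          # range-compressed echoes
      ant = np.random.rand(npulses, 2) * 100.0         # antenna position per pulse
      gx, gy = np.meshgrid(np.linspace(0, 100, npix), np.linspace(0, 100, npix))
      dr = 1.0                                         # range-bin spacing

      image = np.zeros((npix, npix))
      for p in range(npulses):                         # one backprojection per pulse
          rng = np.hypot(gx - ant[p, 0], gy - ant[p, 1])
          bins = np.clip((rng / dr).astype(int), 0, nsamp - 1)
          image += pulses[p][bins]                     # gather and accumulate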

  13. Optical backplane interconnect switch for data processors and computers

    Science.gov (United States)

    Hendricks, Herbert D.; Benz, Harry F.; Hammer, Jacob M.

    1989-01-01

    An optoelectronic integrated device design is reported which can be used to implement an all-optical backplane interconnect switch. The switch is sized to accommodate an array of processors and memories suitable for direct replacement into the basic avionic multiprocessor backplane. The optical backplane interconnect switch is also suitable for direct replacement of the PI bus traffic switch and at the same time, suitable for supporting pipelining of the processor and memory. The 32 bidirectional switchable interconnects are configured with broadcast capability for controls, reconfiguration, and messages. The approach described here can handle a serial interconnection of data processors or a line-to-link interconnection of data processors. An optical fiber demonstration of this approach is presented.

  14. Single neuron computation

    CERN Document Server

    McKenna, Thomas M; Zornetzer, Steven F

    1992-01-01

    This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding single neuron structure. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real neurons...

  15. Study of an analog/logic processor for the design of an auto patch hybrid computer

    International Nuclear Information System (INIS)

    Koched, Hassen

    1976-01-01

    This paper presents the experimental study of an analog multiprocessor designed at SES/CEN-Saclay. An application of such a device as a basic component of an auto-patch hybrid computer is presented. First, a description of the processor and a presentation of the theoretical concepts which governed its design are given. Experiments on a hybrid computer are then presented. Finally, different systems of automatic patching are presented and conveniently modified for the use of such a processor. (author) [fr]

  16. Input data requirements for special processors in the computation system containing the VENTURE neutronics code

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1979-07-01

    User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated

  17. Input data requirements for special processors in the computation system containing the VENTURE neutronics code

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1976-11-01

    This report presents user input data requirements for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user-oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated

  18. Computing effective properties of random heterogeneous materials on heterogeneous parallel processors

    Science.gov (United States)

    Leidi, Tiziano; Scocchi, Giulio; Grossi, Loris; Pusterla, Simone; D'Angelo, Claudio; Thiran, Jean-Philippe; Ortona, Alberto

    2012-11-01

    In recent decades, finite element (FE) techniques have been extensively used for predicting effective properties of random heterogeneous materials. In the case of very complex microstructures, the choice of numerical methods for the solution of this problem can offer some advantages over classical analytical approaches, and it allows the use of digital images obtained from real material samples (e.g., using computed tomography). On the other hand, having a large number of elements is often necessary for properly describing complex microstructures, ultimately leading to extremely time-consuming computations and high memory requirements. With the final objective of reducing these limitations, we improved an existing freely available FE code for the computation of effective conductivity (electrical and thermal) of microstructure digital models. To allow execution on hardware combining multi-core CPUs and a GPU, we first translated the original algorithm from Fortran to C, and we subdivided it into software components. Then, we enhanced the C version of the algorithm for parallel processing with heterogeneous processors. With the goal of maximizing the obtained performances and limiting resource consumption, we utilized a software architecture based on stream processing, event-driven scheduling, and dynamic load balancing. The parallel processing version of the algorithm has been validated using a simple microstructure consisting of a single sphere located at the centre of a cubic box, yielding consistent results. Finally, the code was used for the calculation of the effective thermal conductivity of a digital model of a real sample (a ceramic foam obtained using X-ray computed tomography). On a computer equipped with dual hexa-core Intel Xeon X5670 processors and an NVIDIA Tesla C2050, the parallel application version features near-linear speed-up when using only the CPU cores. It executes more than 20 times faster when additionally using the GPU.

  19. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    Energy Technology Data Exchange (ETDEWEB)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient

  20. Computationally efficient implementation of sparse-tap FIR adaptive filters with tap-position control on Intel IA-32 processors

    OpenAIRE

    Hirano, Akihiro; Nakayama, Kenji

    2008-01-01

    This paper presents a computationally efficient implementation of sparse-tap FIR adaptive filters with tap-position control on Intel IA-32 processors with single-instruction multiple-data (SIMD) capability. In order to overcome the random-order memory access which prevents vectorization, block-based processing and a re-ordering buffer are introduced. A dynamic register allocation and the use of memory-to-register operations help maximize the loop-unrolling level. Up to 66 percent speedup ...

  1. Computing prime factors with a Josephson phase qubit quantum processor

    Science.gov (United States)

    Lucero, Erik; Barends, R.; Chen, Y.; Kelly, J.; Mariantoni, M.; Megrant, A.; O'Malley, P.; Sank, D.; Vainsencher, A.; Wenner, J.; White, T.; Yin, Y.; Cleland, A. N.; Martinis, John M.

    2012-10-01

    A quantum processor can be used to exploit quantum mechanics to find the prime factors of composite numbers. Compiled versions of Shor's algorithm and Gauss sum factorizations have been demonstrated on ensemble quantum systems, photonic systems and trapped ions. Although proposed, these algorithms have yet to be shown using solid-state quantum bits. Using a number of recent qubit control and hardware advances, here we demonstrate a nine-quantum-element solid-state quantum processor and show three experiments to highlight its capabilities. We begin by characterizing the device with spectroscopy. Next, we produce coherent interactions between five qubits and verify bi- and tripartite entanglement through quantum state tomography. In the final experiment, we run a three-qubit compiled version of Shor's algorithm to factor the number 15, and successfully find the prime factors 48% of the time. Improvements in the superconducting qubit coherence times and more complex circuits should provide the resources necessary to factor larger composite numbers and run more intricate quantum algorithms.
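
    The classical scaffolding around this experiment is compact: Shor's algorithm reduces factoring N to finding the period r of a^x mod N (the step performed on the quantum processor), after which the factors follow from greatest common divisors. A sketch for N = 15 with base a = 7:

      from math import gcd

      N, a = 15, 7
      r = 1
      while pow(a, r, N) != 1:       # period finding: the quantum subroutine,
          r += 1                     # here replaced by classical brute force
      # r = 4 is even and a**(r//2) != -1 (mod N), so both gcds are nontrivial:
      print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))   # 3 5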

  2. Single-chip serial channel enhances multi-processor systems

    Energy Technology Data Exchange (ETDEWEB)

    Millar, J.

    1982-01-01

    In this paper multiprocessor systems are described and explained. The impact that VLSI advancements are having on multiprocessor design is pointed out. The TMS 7041 single-chip microcomputer is described briefly, highlighting its multiprocessor communication capability. And finally, a typical multiprocessor system is shown, implementing the TMS 7041.

  3. A scalable single-chip multi-processor architecture with on-chip RTOS kernel

    NARCIS (Netherlands)

    Theelen, B.D.; Verschueren, A.C.; Reyes Suarez, V.V.; Stevens, M.P.J.; Nunez, A.

    2003-01-01

    Now that system-on-chip technology is emerging, single-chip multi-processors are becoming feasible. A key problem of designing such systems is the complexity of their on-chip interconnects and memory architecture. It is furthermore unclear at what level software should be integrated. An example of a

  4. Vector and parallel processors in computational science. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I S; Reid, J K

    1985-01-01

    This volume contains papers from most of the invited talks and from several of the contributed talks and poster sessions presented at VAPP II. The contents present an extensive coverage of all important aspects of vector and parallel processors, including hardware, languages, numerical algorithms and applications. The topics covered include descriptions of new machines (both research and commercial machines), languages and software aids, and general discussions of whole classes of machines and their uses. Numerical methods papers include Monte Carlo algorithms, iterative and direct methods for solving large systems, finite elements, optimization, random number generation and mathematical software. The specific applications covered include neutron diffusion calculations, molecular dynamics, weather forecasting, lattice gauge calculations, fluid dynamics, flight simulation, cartography, image processing and cryptography. Most machines and architecture types are being used for these applications. many refs.

  5. The Square Kilometre Array Science Data Processor. Preliminary compute platform design

    International Nuclear Information System (INIS)

    Broekema, P.C.; Nieuwpoort, R.V. van; Bal, H.E.

    2015-01-01

    The Square Kilometre Array is a next-generation radio-telescope, to be built in South Africa and Western Australia. It is currently in its detailed design phase, with procurement and construction scheduled to start in 2017. The SKA Science Data Processor is the high-performance computing element of the instrument, responsible for producing science-ready data. This is a major IT project, with the Science Data Processor expected to challenge the computing state-of-the art even in 2020. In this paper we introduce the preliminary Science Data Processor design and the principles that guide the design process, as well as the constraints to the design. We introduce a highly scalable and flexible system architecture capable of handling the SDP workload

  6. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs

  7. Bringing Algorithms to Life: Cooperative Computing Activities Using Students as Processors.

    Science.gov (United States)

    Bachelis, Gregory F.; And Others

    1994-01-01

    Presents cooperative computing activities in which each student plays the role of a switch or processor and acts out algorithms. Includes binary counting, finding the smallest card in a deck, sorting by selection and merging, adding and multiplying large numbers, and sieving for primes. (16 references) (Author/MKR)
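
    One of the named activities, finding the smallest card, acts out a logarithmic-depth parallel reduction: in each round, pairs of students compare cards and the smaller card survives, so n students need about log2(n) rounds. A sketch of the schedule:

      def tournament_min(cards):
          rounds = 0
          while len(cards) > 1:      # each loop is one simultaneous round
              cards = [min(cards[i:i + 2]) for i in range(0, len(cards), 2)]
              rounds += 1
          return cards[0], rounds

      print(tournament_min([7, 3, 9, 1, 5, 8, 2, 6]))   # (1, 3): 8 cards, 3 rounds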

  8. Mathematical Methods and Algorithms of Mobile Parallel Computing on the Base of Multi-core Processors

    Directory of Open Access Journals (Sweden)

    Alexander B. Bakulev

    2012-11-01

    This article deals with mathematical models and algorithms that provide mobility for the parallel representation of sequential programs in a high-level language. It presents a formal model of operating-environment process management, based on the proposed model of parallel program representation, describing the computation process on multi-core processors.

  9. Networking Micro-Processors for Effective Computer Utilization in Nursing

    OpenAIRE

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes one process of networking of complementary resources at three institutions: Prairie View A&M University, Texas A&M University and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in...

  10. Restricted access processor - An application of computer security technology

    Science.gov (United States)

    Mcmahon, E. M.

    1985-01-01

    This paper describes a security guard device that is currently being developed by Computer Sciences Corporation (CSC). The methods used to provide assurance that the system meets its security requirements include the system architecture, a system security evaluation, and the application of formal and informal verification techniques. The combination of state-of-the-art technology and the incorporation of new verification procedures results in a demonstration of the feasibility of computer security technology for operational applications.

  11. Scalable Parallelization of Skyline Computation for Multi-core Processors

    DEFF Research Database (Denmark)

    Chester, Sean; Sidlauskas, Darius; Assent, Ira

    2015-01-01

    The skyline is an important query operator for multi-criteria decision making. It reduces a dataset to only those points that offer optimal trade-offs of dimensions. In general, it is very expensive to compute. Recently, multi-core CPU algorithms have been proposed to accelerate the computation of the skyline. However, they do not sufficiently minimize dominance tests and so are not competitive with state-of-the-art sequential algorithms. In this paper, we introduce a novel multi-core skyline algorithm, Hybrid, which processes points in blocks. It maintains a shared, global skyline among all threads...
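
    The dominance test that such algorithms strive to minimize, and a straightforward sequential skyline kernel around it, fit in a few lines (here smaller is taken as better in every dimension):

      def dominates(p, q):           # p dominates q: no worse anywhere, better somewhere
          return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

      def skyline(points):
          sky = []
          for p in points:
              if any(dominates(s, p) for s in sky):
                  continue           # p is dominated; discard it
              sky = [s for s in sky if not dominates(p, s)] + [p]
          return sky

      pts = [(1, 9), (3, 3), (2, 8), (5, 1), (4, 4)]
      print(skyline(pts))            # [(1, 9), (3, 3), (2, 8), (5, 1)]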

  12. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  13. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software. Revision 3

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  14. PHANTOM: Practical Oblivious Computation in a Secure Processor

    Science.gov (United States)

    2014-05-16


  15. DiFX: A software correlator for very long baseline interferometry using multi-processor computing environments

    OpenAIRE

    Deller, A. T.; Tingay, S. J.; Bailes, M.; West, C.

    2007-01-01

    We describe the development of an FX style correlator for Very Long Baseline Interferometry (VLBI), implemented in software and intended to run in multi-processor computing environments, such as large clusters of commodity machines (Beowulf clusters) or computers specifically designed for high performance computing, such as multi-processor shared-memory machines. We outline the scientific and practical benefits for VLBI correlation, these chiefly being due to the inherent flexibility of softw...

  16. Single event effects and performance predictions for space applications of RISC processors

    International Nuclear Information System (INIS)

    Kimbrough, J.R.; Colella, N.J.; Denton, S.M.; Shaeffer, D.L.; Shih, D.; Wilburn, J.W.; Coakley, P.G.; Casteneda, C.; Koga, R.; Clark, D.A.; Ullmann, J.L.

    1994-01-01

    Proton and ion Single Event Phenomena (SEP) tests were performed on 32-bit processors including R3000A's from all commercial manufacturers along with the Performance PR3400 family, Integrated Device Technology Inc. 79R3081, LSI Logic Corporation LR33000HC, and Intel i80960MX parts. The microprocessors had acceptable upset rates for operation in a low earth orbit or a lunar mission such as CLEMENTINE, with a wide range in proton total-dose failure levels. Even though R3000A devices are 60% smaller in physical area than R3000 devices, there was a 340% increase in device Single Event Upset (SEU) cross section. Software tests of varying complexity demonstrate that registers and other functional blocks using register architecture dominate the cross section. The current approach of giving a single upset cross section can lead to erroneous upset rates depending on the application software

  17. A Versatile Image Processor For Digital Diagnostic Imaging And Its Application In Computed Radiography

    Science.gov (United States)

    Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.

    1986-06-01

    In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing and aims at real time response. Parallel processing and real time response are facilitated in part by a dual bus system: a VME control bus and a high speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of handling a combined rate of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.

  18. Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation

    Science.gov (United States)

    Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.

    2017-10-01

    A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes are sufficiently big to consider the well-known QEC threshold. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.

  19. 35-We polymer electrolyte membrane fuel cell system for notebook computer using a compact fuel processor

    Science.gov (United States)

    Son, In-Hyuk; Shin, Woo-Cheol; Lee, Yong-Kul; Lee, Sung-Chul; Ahn, Jin-Gu; Han, Sang-Il; Kweon, Ho-Jin; Kim, Ju-Yong; Kim, Moon-Chan; Park, Jun-Yong

    A polymer electrolyte membrane fuel cell (PEMFC) system is developed to power a notebook computer. The system consists of a compact methanol-reforming system with a CO preferential oxidation unit, a 16-cell PEMFC stack, and a control unit for the management of the system with a d.c.-d.c. converter. The compact fuel-processor system (260 cm³) generates about 1.2 L min⁻¹ of reformate, which corresponds to 35 We, with a low CO concentration (<30 ppm, typically 0 ppm), and is thus proven to be capable of being targeted at notebook computers.

  20. 35-We polymer electrolyte membrane fuel cell system for notebook computer using a compact fuel processor

    Energy Technology Data Exchange (ETDEWEB)

    Son, In-Hyuk; Shin, Woo-Cheol; Lee, Sung-Chul; Ahn, Jin-Gu; Han, Sang-Il; Kweon, Ho-Jin; Kim, Ju-Yong; Park, Jun-Yong [Energy 1 Group, Energy Laboratory at Corporate R and D Center in Samsung SDI Co., Ltd., 575, Shin-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-731 (Korea); Lee, Yong-Kul [Department of Chemical Engineering, Dankook University, Youngin 448-701 (Korea); Kim, Moon-Chan [Department of Environmental Engineering, Chongju University, Chongju 360-764 (Korea)

    2008-10-15

    A polymer electrolyte membrane fuel cell (PEMFC) system is developed to power a notebook computer. The system consists of a compact methanol-reforming system with a CO preferential oxidation unit, a 16-cell PEMFC stack, and a control unit for the management of the system with a d.c.-d.c. converter. The compact fuel-processor system (260 cm{sup 3}) generates about 1.2 L min{sup -1} of reformate, which corresponds to 35 We, with a low CO concentration (<30 ppm, typically 0 ppm), and is thus proven to be capable of being targeted at notebook computers. (author)

  1. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  2. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software. Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  3. Functional Verification of Enhanced RISC Processor

    OpenAIRE

    SHANKER NILANGI; SOWMYA L

    2013-01-01

    This paper presents the design and verification of a 32-bit enhanced RISC processor core with floating-point computation integrated within the core to reduce cost and complexity. The 3-stage pipelined 32-bit RISC processor is based on the ARM7 processor architecture, with a single-precision floating-point multiplier, a floating-point adder/subtractor for floating-point operations, and a 32 x 32 Booth's multiplier added to the ARM7 integer core. The binary representati...
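
    As a side illustration (not part of the paper), the radix-2 Booth recoding that a 32 x 32 Booth multiplier implements in hardware can be rendered in a few lines of software:

        def booth_multiply(m, r, n=32):
            """Radix-2 Booth multiplication of two n-bit two's-complement integers."""
            mask = (1 << (2 * n + 1)) - 1          # width of [A | Q | q-1]
            P = (r & ((1 << n) - 1)) << 1          # product register, q-1 = 0
            for _ in range(n):
                pair = P & 0b11                    # inspect bits q0 and q-1
                if pair == 0b01:                   # 01: add multiplicand to A
                    P = (P + ((m & ((1 << n) - 1)) << (n + 1))) & mask
                elif pair == 0b10:                 # 10: subtract multiplicand from A
                    P = (P + (((-m) & ((1 << n) - 1)) << (n + 1))) & mask
                sign = P & (1 << (2 * n))          # arithmetic shift right by one
                P = (P >> 1) | sign
            result = P >> 1                        # drop the q-1 bit
            if result & (1 << (2 * n - 1)):        # reinterpret as signed 2n-bit value
                result -= 1 << (2 * n)
            return result

        assert booth_multiply(3, -4, n=4) == -12
        assert booth_multiply(2, 3, n=4) == 6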

  4. Development of a new signal processor for tetralateral position sensitive detector based on single-chip microcomputer

    International Nuclear Information System (INIS)

    Huang Meizhen; Shi Longzhao; Wang Yuxing; Ni Yi; Li Zhenqing; Ding Haifeng

    2006-01-01

    An inherently nonlinear relation between the output current of the tetralateral position sensitive detector (PSD) and the position of the incident light spot has been found theoretically. Based on a single-chip microcomputer and the theoretical relation between output current and position, a new signal processor capable of correcting nonlinearity and reducing position measurement deviation of tetralateral PSDs was developed. A tetralateral PSD (S1200, 13×13 mm², Hamamatsu Photonics K.K.) was measured with the new signal processor, and a linear relation between the output position of the PSD and the incident position of the light spot was obtained. In the 60% range of a 13×13 mm² active area, the position nonlinearity (rms) was 0.15% and the position measurement deviation (rms) was ±20 μm. Compared with a traditional analog signal processor, the new signal processor offers better compatibility, lower cost, higher precision, and easier interfacing.
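
    For orientation (not from the record), the usual first-order position estimate for a tetra-lateral PSD is a normalized difference of the electrode currents, to which a correction like the one developed here is then applied; the electrode-to-axis pairing and the correction polynomial below are assumptions for illustration only:

        import numpy as np

        def psd_position(i1, i2, i3, i4, L=13.0):
            """Approximate light-spot coordinates (mm) on an L x L tetra-lateral PSD.

            Assumes electrodes 1..4 are ordered so that (i2 + i3) - (i1 + i4)
            measures x and (i1 + i2) - (i3 + i4) measures y."""
            total = i1 + i2 + i3 + i4
            x = (L / 2) * ((i2 + i3) - (i1 + i4)) / total
            y = (L / 2) * ((i1 + i2) - (i3 + i4)) / total
            return x, y

        def correct_nonlinearity(u, L=13.0, c3=0.12):
            """Hypothetical odd-polynomial correction of the pincushion-like error."""
            v = 2 * u / L                           # normalize to [-1, 1]
            return (L / 2) * (v + c3 * v**3)

        x, y = psd_position(1.0e-6, 1.4e-6, 1.3e-6, 0.9e-6)
        print(correct_nonlinearity(x), correct_nonlinearity(y))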

  5. TMS320C25 Digital Signal Processor For 2-Dimensional Fast Fourier Transform Computation

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    1996-01-01

    The Fourier transform is one of the most important mathematical tools in signal processing and analysis, converting information from the time/spatial domain into the frequency domain. Even with Fast Fourier Transform algorithms, executing the discrete Fourier transform on imaging data consumes a lot of time. Digital signal processors are designed specifically to perform computation-intensive digital signal processing algorithms. By taking advantage of an advanced architecture, parallel processing, and dedicated digital signal processing (DSP) instruction sets, such a device can execute millions of DSP operations per second. The device architecture, characteristics, and features suitable for fast Fourier transform applications and speed-up are discussed.
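
    The key decomposition a 1-D DSP exploits for this task is that a 2-D DFT separates into 1-D FFTs along rows and then columns; a minimal sketch of the row-column method:

        import numpy as np

        image = np.random.rand(8, 8)

        rows = np.fft.fft(image, axis=1)     # 1-D FFT of every row
        full = np.fft.fft(rows, axis=0)      # then 1-D FFT of every column

        assert np.allclose(full, np.fft.fft2(image))   # matches the direct 2-D FFT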

  6. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    Science.gov (United States)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    The capability to rapidly produce visual representations of large, complex, multi-dimensional space and earth sciences data sets was developed by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  7. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  8. A prediction method for job runtimes on shared processors: Survey, statistical analysis and new avenues

    NARCIS (Netherlands)

    Dobber, A.M.; van der Mei, R.D.; Koole, G.M.

    2007-01-01

    Grid computing is an emerging technology by which huge numbers of processors around the world create a global source of processing power. Their collaboration makes it possible to perform computations that are too extensive to perform on a single processor. On a grid, processors may connect and

  9. Custom Hardware Processor to Compute a Figure of Merit for the Fit of X-Ray Diffraction

    International Nuclear Information System (INIS)

    Gomez-Pulido, P.J.A.; Vega-Rodriguez, M.A.; Sanchez-Perez, J.M.; Sanchez-Bajo, F.; Santos, S.P.D.

    2008-01-01

    A custom processor based on reconfigurable hardware technology is proposed in order to compute the figure of merit used to measure the quality of the fit of X-ray diffraction peaks. As experimental X-ray profiles can present many severely overlapped peaks, it is necessary to select the best model among a large set of reasonably good solutions. Determining the best solution is computationally intensive, because this is a hard combinatorial optimization problem. The proposed processors, working in parallel, increase the performance relative to a software implementation.
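
    As a sketch of the kind of computation being accelerated (the exact figure of merit used in the paper is not given in this record), a weighted sum of squared residuals between observed and modelled profiles is a common choice; the peak model and weights below are illustrative:

        import numpy as np

        def model(two_theta, peaks):
            """Toy profile: sum of Gaussian peaks given (position, height, width)."""
            y = np.zeros_like(two_theta)
            for pos, height, width in peaks:
                y += height * np.exp(-0.5 * ((two_theta - pos) / width) ** 2)
            return y

        def figure_of_merit(y_obs, y_calc):
            """Weighted residual with counting-statistics weights w = 1/y_obs."""
            w = 1.0 / np.maximum(y_obs, 1.0)
            return np.sum(w * (y_obs - y_calc) ** 2)

        two_theta = np.linspace(20.0, 60.0, 2000)
        observed  = model(two_theta, [(30.00, 100, 0.20), (30.30, 60, 0.25)]) + 5
        candidate = model(two_theta, [(30.02,  95, 0.22), (30.28, 62, 0.24)]) + 5
        print(figure_of_merit(observed, candidate))   # lower is a better fit

    Evaluating such a merit function over a large population of candidate peak sets is the combinatorial load that the parallel hardware absorbs.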

  10. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    Science.gov (United States)

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
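
    A toy calculation (not from the paper) makes the load-balance argument concrete: with whole-cell balancing, one cell larger than the average per-processor load caps the achievable speedup, while splitting it lets the halves land on different processors:

        import heapq

        def max_load(costs, nproc):
            """Greedy longest-processing-time assignment; returns the busiest load."""
            loads = [0.0] * nproc
            heapq.heapify(loads)
            for c in sorted(costs, reverse=True):
                lightest = heapq.heappop(loads)
                heapq.heappush(loads, lightest + c)
            return max(loads)

        cells = [8.0, 1.0, 1.0, 1.0, 1.0]         # one big cell (arbitrary cost units)
        print(max_load(cells, 2))                  # 8.0: runtime set by the big cell

        split = [4.0, 4.0, 1.0, 1.0, 1.0, 1.0]     # big cell split into two subtrees
        print(max_load(split, 2))                  # 6.0: the ideal 12 / 2 balance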

  11. Single channel analog pulse processor ASIC for gas proportional counters and Si detectors

    International Nuclear Information System (INIS)

    Chandratre, V.B.; Sarkar, Soumen; Kataria, S.K.; Viyogi, Y.P.

    2005-01-01

    The paper presents the design and development of a single-channel pulse processor ASIC, Singleplex for short, targeted at gas proportional counters and Si detectors. The design is optimized for a dynamic range of +500 fC to -500 fC, with provision for externally adjusted pole-zero cancellation. A dedicated filter based on the de-convolution principle is used for cancellation of the long hyperbolic signal tail produced by the slow drift of ions, typical in gas proportional counters, with the filter time constants derived from the actual detector input signal shape. The pole-zero adjustment can be done by an external dc voltage to achieve base-line recovery to 1% after 5 μs. The simulated noise at 0 pF is 500 e⁻ rms for a peaking time of 1.2 μs, with a noise slope of 7 e⁻/pF. The gain is 3.4 mV/fC over the entire linear dynamic range, with a power dissipation of 13 mW. This design is a modified version of the Indiplex chip, featuring equal gain for both signal polarities with nearly the same noise, and serves as a diagnostic chip for Indiplex. The chip can be used in radiation monitoring instruments. (author)
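
    A minimal sketch of the tail-cancellation idea (with illustrative time constants, not those of the ASIC): if the slow ion tail is modelled as an exponential, a one-zero deconvolution filter collapses it to an impulse:

        import numpy as np

        T, tau = 0.1, 5.0                       # sample period and tail constant (us), assumed
        n = np.arange(200)
        pulse = np.exp(-n * T / tau)            # unit pulse with a slow exponential tail

        a = np.exp(-T / tau)                    # deconvolution coefficient
        deconvolved = pulse - a * np.concatenate(([0.0], pulse[:-1]))

        print(deconvolved[:3])                  # ~[1, 0, 0]: the tail is removed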

  12. EXPERIENCE WITH FPGA-BASED PROCESSOR CORE AS FRONT-END COMPUTER

    International Nuclear Information System (INIS)

    HOFF, L.T.

    2005-01-01

    The RHIC control system architecture follows the familiar "standard model". LINUX workstations are used as operator consoles. Front-end computers are distributed around the accelerator, close to equipment being controlled or monitored. These computers are generally based on VMEbus CPU modules running the VxWorks operating system. I/O is typically performed via the VMEbus, or via PMC daughter cards (via an internal PCI bus), or via on-board I/O interfaces (Ethernet or serial). Advances in FPGA size and sophistication now permit running virtual processor "cores" within the FPGA logic, including "cores" with advanced features such as memory management. Such systems offer certain advantages over traditional VMEbus front-end computers. Advantages include tighter coupling with FPGA logic, and therefore higher I/O bandwidth, and flexibility in packaging, possibly resulting in a lower noise environment and/or lower cost. This paper presents the experience acquired while porting the RHIC control system to a PowerPC 405 core within a Xilinx FPGA for use in low-level RF control.

  13. Embedded Processor Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Embedded Processor Laboratory provides the means to design, develop, fabricate, and test embedded computers for missile guidance electronics systems in support...

  14. Icarus: A 2-D Direct Simulation Monte Carlo (DSMC) Code for Multi-Processor Computers

    International Nuclear Information System (INIS)

    BARTEL, TIMOTHY J.; PLIMPTON, STEVEN J.; GALLIS, MICHAIL A.

    2001-01-01

    Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird[11.1] and models from free-molecular to continuum flowfields in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled by either a local charge neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input, from a Langmuir-Tonks model, or from a Green's Function (Boundary Element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.

  15. Analysis of the computational requirements of a pulse-doppler radar signal processor

    CSIR Research Space (South Africa)

    Broich, R

    2012-05-01

    Full Text Available In an attempt to find an optimal processing architecture for radar signal processing applications, the different algorithms that are typically used in a pulse-Doppler radar signal processor are investigated. Radar algorithms are broken down...
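
    For context (a generic sketch, not the decomposition used in the report), the heart of a pulse-Doppler chain is pulse compression in fast time followed by a Doppler FFT across pulses; the waveform and target parameters below are invented:

        import numpy as np

        rng = np.random.default_rng(0)
        n_pulses, n_samples = 64, 256
        chirp = np.exp(1j * np.pi * 50 * np.linspace(-1, 1, 32) ** 2)   # toy LFM pulse

        # Simulated echoes: a target near range bin 100 with 0.2 cycles/pulse Doppler.
        data = 0.1 * (rng.standard_normal((n_pulses, n_samples))
                      + 1j * rng.standard_normal((n_pulses, n_samples)))
        for p in range(n_pulses):
            data[p, 100:100 + chirp.size] += chirp * np.exp(2j * np.pi * 0.2 * p)

        mf = np.conj(chirp[::-1])                                   # matched filter
        compressed = np.array([np.convolve(d, mf, mode="same") for d in data])
        range_doppler = np.fft.fftshift(np.fft.fft(compressed, axis=0), axes=0)

        peak = np.unravel_index(np.abs(range_doppler).argmax(), range_doppler.shape)
        print(peak)    # Doppler bin near 32 + 0.2 * 64, range bin near the target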

  16. HONEI: A collection of libraries for numerical computations targeting multiple processor architectures

    Science.gov (United States)

    van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten

    2009-12-01

    We present HONEI, an open-source collection of libraries offering a hardware-oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and an additional 3-4 and 4-16 times faster execution on the Cell and a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development. Program summary: Program title: HONEI; Catalogue identifier: AEDW_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GPLv2; No. of lines in distributed program, including test data, etc.: 216 180; No. of bytes in distributed program, including test data, etc.: 1 270 140; Distribution format: tar.gz; Programming language: C++; Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3; Operating system: Linux; RAM: at least 500 MB free; Classification: 4.8, 4.3, 6.1; External routines: SSE: none; [1] for GPU, [2] for Cell backend; Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the

  17. Computations on the massively parallel processor at the Goddard Space Flight Center

    Science.gov (United States)

    Strong, James P.

    1991-01-01

    Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.
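
    The bitonic sort mentioned last has a compact sequential rendering (below, for power-of-two input sizes); its appeal on a processor grid is that every stage performs the same compare-exchange at a fixed distance:

        def bitonic_merge(xs, ascending):
            if len(xs) <= 1:
                return xs
            half = len(xs) // 2
            xs = xs[:]
            for i in range(half):                 # compare-exchange at distance half
                if (xs[i] > xs[i + half]) == ascending:
                    xs[i], xs[i + half] = xs[i + half], xs[i]
            return (bitonic_merge(xs[:half], ascending)
                    + bitonic_merge(xs[half:], ascending))

        def bitonic_sort(xs, ascending=True):
            if len(xs) <= 1:
                return xs
            half = len(xs) // 2
            left = bitonic_sort(xs[:half], True)     # ascending half
            right = bitonic_sort(xs[half:], False)   # descending half forms a bitonic sequence
            return bitonic_merge(left + right, ascending)

        assert bitonic_sort([7, 3, 6, 2, 5, 1, 4, 0]) == list(range(8))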

  18. XL-100S microprogrammable processor

    International Nuclear Information System (INIS)

    Gorbunov, N.V.; Guzik, Z.; Sutulin, V.A.; Forytski, A.

    1983-01-01

    The XL-100S microprogrammable processor, providing a multiprocessor operation mode in the XL system crate, is described. The processor meets the EUR 6500 CAMAC standards, addresses up to 4 Mbytes of memory, and interacts with 7 CAMAC branches. Eight external requests initiate operations preset by a sequence of microcommands in a memory with a capacity of up to 64 kwords of 32 bits. The microprocessor architecture allows one to emulate the instruction sets of the majority of mini- or micro-computers, including floating point operations. The XL-100S processor may be used in various branches of experimental physics: for control of physics experiment apparatus, fast selection of useful physical events, organization of input/output operations (direct memory access included), etc. The Am2900 microprocessor set is used as the elementary base. The device is made in the form of a single-width CAMAC module.

  19. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    Energy Technology Data Exchange (ETDEWEB)

    Roberson, G. Patrick [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Browne, Jolyon [Advanced Research & Applications Corporation, Sunnyvale, CA (United States)

    2018-01-22

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  20. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    Directory of Open Access Journals (Sweden)

    J. I. Colless

    2018-02-01

    Full Text Available Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H_{2} molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  1. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    Science.gov (United States)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.

    2018-02-01

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.
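
    To make the variational idea concrete (a classical toy, not the experiment's qubit Hamiltonian), a one-parameter trial state minimized over a 2 x 2 Hamiltonian recovers the exact ground-state energy, which is the principle VQE exploits with a quantum circuit preparing the trial state:

        import numpy as np
        from scipy.optimize import minimize_scalar

        H = np.array([[1.0, 0.5],
                      [0.5, -1.0]])              # toy stand-in Hamiltonian

        def energy(theta):
            psi = np.array([np.cos(theta), np.sin(theta)])   # trial state |psi(theta)>
            return psi @ H @ psi                             # <psi|H|psi>

        result = minimize_scalar(energy, bounds=(0.0, np.pi), method="bounded")
        print(result.fun, np.linalg.eigvalsh(H)[0])   # variational minimum = exact ground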

  2. Design and Demonstration of RSFQ Processor Datapath for High Performance Computing

    Science.gov (United States)

    2014-09-30

    Microarchitecture for Wide Datapath RSFQ Processors: Design Study, IEEE Transactions on Applied Superconductivity (06/2011): 787, doi: 10.1109/TASC... The chip has a size of 4.4 mm x 3.0 mm and consists of ~4000 Josephson junctions. The total length of passive transmission lines (PTL) used in wiring is ~0.2 m.

  3. Parallel quantum computing in a single ensemble quantum computer

    International Nuclear Information System (INIS)

    Long Guilu; Xiao, L.

    2004-01-01

    We propose a parallel quantum computing mode for ensemble quantum computers. In this mode, some qubits are in pure states while other qubits are in mixed states. It enables a single ensemble quantum computer to perform 'single-instruction-multiple-data' parallel computation. Parallel quantum computing can provide additional speedup in Grover's algorithm and Shor's algorithm. In addition, it makes fuller use of qubit resources in an ensemble quantum computer. As a result, some qubits discarded in the preparation of an effective pure state in the Schulman-Vazirani and the Cleve-DiVincenzo algorithms can be reutilized.

  4. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks, such as memory latencies, occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
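
    The slice-level decomposition described here can be sketched without any GPU at all (processes stand in for GPU thread blocks, and the per-slice kernel below is a placeholder filter, not a real backprojection):

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def reconstruct_slice(slice_data):
            """Placeholder per-slice kernel (an unsharp-mask filter for illustration)."""
            blur = (np.roll(slice_data, 1, 0) + np.roll(slice_data, -1, 0)
                    + np.roll(slice_data, 1, 1) + np.roll(slice_data, -1, 1)) / 4
            return slice_data + 0.5 * (slice_data - blur)

        if __name__ == "__main__":
            volume = [np.random.rand(256, 256) for _ in range(32)]   # 32 independent slices
            with ProcessPoolExecutor() as pool:                      # slices run concurrently
                reconstructed = list(pool.map(reconstruct_slice, volume))
            print(len(reconstructed), reconstructed[0].shape)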

  5. Single-board 32-bit computer for the FASTBUS

    International Nuclear Information System (INIS)

    Kellner, R.; Blossom, J.M.; Hong, J.P.

    1985-01-01

    The Los Alamos National Laboratory is building a 32-bit computer on a FASTBUS board. It will use the National Semiconductor 32032 chip set, including the demand-paged memory management, floating point slave processor and interrupt control chips. The board will support 4 megabytes of memory which can be accessed by the processor over an on-board execution bus at processor speeds and which can be accessed by the FASTBUS at 80 megabytes per second. A windowed, direct memory access mechanism allows transfers of up to all of the memory.

  6. Transportable GPU (General Processor Units) chip set technology for standard computer architectures

    Science.gov (United States)

    Fosdick, R. E.; Denison, H. C.

    1982-11-01

    The USAFR-developed GPU Chip Set has been utilized by Tracor to implement both USAF and Navy Standard 16-Bit Airborne Computer Architectures. Both configurations are currently being delivered into DOD full-scale development programs. Leadless Hermetic Chip Carrier packaging has facilitated implementation of both architectures on single 4-1/2 x 5 substrates. The CMOS and CMOS/SOS implementations of the GPU Chip Set have allowed both CPU implementations to use less than 3 watts of power each. Recent efforts by Tracor for USAF have included the definition of a next-generation GPU Chip Set that will retain the application-proven architecture of the current chip set while offering the added cost advantages of transportability across ISO-CMOS and CMOS/SOS processes and across numerous semiconductor manufacturers using a newly-defined set of common design rules. The Enhanced GPU Chip Set will increase speed by an approximate factor of 3 while significantly reducing chip counts and costs of standard CPU implementations.

  7. Supercomputers and parallel computation. Based on the proceedings of a workshop on progress in the use of vector and array processors organised by the Institute of Mathematics and its Applications and held in Bristol, 2-3 September 1982

    International Nuclear Information System (INIS)

    Paddon, D.J.

    1984-01-01

    This book is based on the proceedings of a conference on parallel computing held in 1982. There are 18 papers which cover the following topics: VLSI parallel architectures, the theory of parallel computing and vector and array processor computing. One paper on 'Tough Problems in Reactor Design' is indexed separately. All the contributions are on research done in the United Kingdom. Although much of the experience in array processor computing is associated with the ICL distributed array processor (DAP) and this is reflected in the contributions, the research relating to the ICL DAP is relevant to all types of array processors. (UK)

  8. Analog Integrated Circuit Design for Spike Time Dependent Encoder and Reservoir in Reservoir Computing Processors

    Science.gov (United States)

    2018-01-01

    This multidisciplinary effort encompassed high-performance computing, nanotechnology, integrated circuits, and integrated systems. Subject terms: neuromorphic computing, neuron design, spike... The project's architecture was

  9. Application of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the OECD/NRC BWR turbine trip benchmark and its performance on multi-processor computers

    International Nuclear Information System (INIS)

    Langenbuch, S.; Schmidt, K.D.; Velkov, K.

    2003-01-01

    The OECD/NRC BWR Turbine Trip (TT) Benchmark is investigated to perform code-to-code comparison of coupled codes, including a comparison to measured data which are available from turbine trip experiments at Peach Bottom 2. This benchmark problem for a BWR over-pressure transient represents a challenging application of coupled codes which integrate 3-dimensional neutron kinetics into thermal-hydraulic system codes for best-estimate simulation of plant transients. Such transients are typical applications of coupled codes, which are usually run on powerful workstations using a single CPU. Nowadays, however, multi-processor systems are much more readily available: powerful workstations already provide 4 to 8 CPUs, and computer centers give access to multi-processor systems with CPU counts on the order of 16 up to several hundred. Therefore, the performance of the coupled code Athlet-Quabox/Cubbox on multi-processor systems is studied. Different applications place different requirements on code efficiency, because the amount of computer time spent in different parts of the code varies. This paper presents the main results of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the BWR TT Benchmark, together with evaluations of the code performance on multi-processor computers. (authors)

  10. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco

    2017-01-01

    Multiscale modelling of tunnel fires that uses a coupled 3D (fire area) and 1D (the rest of the tunnel) model is seen as the solution to the numerical problem of the large domains associated with long tunnels. The present study demonstrates the feasibility of the implementation of this method in FDS version 6.0, a widely used fire-specific, open source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with the one given by the use of multiple processor calculation. This was done using a 1200 m long tunnel with a rectangular cross-section as a demonstration case. The multiscale implementation consisted of placing a 30 MW fire in the centre of a 400 m long 3D domain, along with two 400 m long 1D ducts on each side of it, that were again bounded by two nodes each. A fixed volume flow was defined in the upstream duct and the two models were coupled...

  11. Online Fastbus processor for LEP

    International Nuclear Information System (INIS)

    Mueller, H.

    1986-01-01

    The author describes the online computing aspects of Fastbus systems using a processor module which has been developed at CERN and is now available commercially. These General Purpose Master/Slaves (GPMs) are based on 68000/10 (or optionally 68020/68881) processors. Applications include use as event filters (DELPHI), supervisory controllers, Fastbus stand-alone diagnostic tools, and multiprocessor array components. The direct mapping of single, 32-bit assembly instructions to execute Fastbus protocols makes the use of a GPM both simple and flexible. Loosely coupled processing in Fastbus networks is possible between GPMs, as they support access semaphores and use a two-port memory as an I/O buffer for Fastbus. Both master and slave ports support block transfers up to 20 Mbytes/s. The CERN standard Fastbus software and the MoniCa symbolic debugging monitor are available on the GPM with real-time, multiprocessing support. (Auth.)

  12. The Heidelberg POLYP - a flexible and fault-tolerant poly-processor

    International Nuclear Information System (INIS)

    Maenner, R.; Deluigi, B.

    1981-01-01

    The Heidelberg poly-processor system POLYP is described. It is intended to be used in nuclear physics for reprocessing of experimental data, in high energy physics as a second-stage trigger processor, and generally in other applications requiring high computing power. The POLYP system consists of any number of I/O processors, processor modules (eventually of different types), global memory segments, and a host processor. All modules (up to several hundred) are connected by a multiple common-data-bus system; all processors, additionally, by a multiple sync-bus system for processor/task scheduling. All hardware and software are designed to be decentralized and free of bottlenecks. Most hardware faults, like single-bit errors in memory or multi-bit errors during transfers, are automatically corrected. Defective modules, buses, etc., can be removed with only a graceful degradation of the system throughput. (orig.)

  13. 3081/E processor

    International Nuclear Information System (INIS)

    Kunz, P.F.; Gravina, M.; Oxoby, G.

    1984-04-01

    The 3081/E project was formed to prepare a much improved IBM mainframe emulator for the future. Its design is based on a large amount of experience in using the 168/E processor to increase available CPU power in both online and offline environments. The processor will be at least equal to the execution speed of a 370/168 and up to 1.5 times faster for heavy floating point code. A single processor will thus be at least four times more powerful than the VAX 11/780, and five processors on a system would equal at least the performance of the IBM 3081K. With its large memory space and simple but flexible high speed interface, the 3081/E is well suited for the online and offline needs of high energy physics in the future

  14. Quantum simulation of superconductors on quantum computers. Toward the first applications of quantum processors

    International Nuclear Information System (INIS)

    Dallaire-Demers, Pierre-Luc

    2016-01-01

    Quantum computers are the ideal platform for quantum simulations. Given enough coherent operations and qubits, such machines can be leveraged to simulate strongly correlated materials, where intricate quantum effects give rise to counter-intuitive macroscopic phenomena such as high-temperature superconductivity. Many phenomena of strongly correlated materials are encapsulated in the Fermi-Hubbard model. In general, no closed-form solution is known for lattices of more than one spatial dimension, but they can be numerically approximated using cluster methods. To model long-range effects such as order parameters, a powerful method to compute the cluster's Green's function consists in finding its self-energy through a variational principle. As is shown in this thesis, this allows the possibility of studying various phase transitions at finite temperature in the Fermi-Hubbard model. However, a classical cluster solver quickly hits an exponential wall in the memory (or computation time) required to store the computation variables. We show theoretically that the cluster solver can be mapped to a subroutine on a quantum computer whose quantum memory usage scales linearly with the number of orbitals in the simulated cluster and the number of measurements scales quadratically. We also provide a gate decomposition of the cluster Hamiltonian and a simple planar architecture for a quantum simulator that can also be used to simulate more general fermionic systems. We briefly analyze the Trotter-Suzuki errors and estimate the scaling properties of the algorithm for more complex applications. A quantum computer with a few tens of qubits could therefore simulate the thermodynamic properties of complex fermionic lattices inaccessible to classical supercomputers.

  15. Quantum simulation of superconductors on quantum computers. Toward the first applications of quantum processors

    Energy Technology Data Exchange (ETDEWEB)

    Dallaire-Demers, Pierre-Luc

    2016-10-07

    Quantum computers are the ideal platform for quantum simulations. Given enough coherent operations and qubits, such machines can be leveraged to simulate strongly correlated materials, where intricate quantum effects give rise to counter-intuitive macroscopic phenomena such as high-temperature superconductivity. Many phenomena of strongly correlated materials are encapsulated in the Fermi-Hubbard model. In general, no closed-form solution is known for lattices of more than one spatial dimension, but they can be numerically approximated using cluster methods. To model long-range effects such as order parameters, a powerful method to compute the cluster's Green's function consists in finding its self-energy through a variational principle. As is shown in this thesis, this allows the possibility of studying various phase transitions at finite temperature in the Fermi-Hubbard model. However, a classical cluster solver quickly hits an exponential wall in the memory (or computation time) required to store the computation variables. We show theoretically that the cluster solver can be mapped to a subroutine on a quantum computer whose quantum memory usage scales linearly with the number of orbitals in the simulated cluster and the number of measurements scales quadratically. We also provide a gate decomposition of the cluster Hamiltonian and a simple planar architecture for a quantum simulator that can also be used to simulate more general fermionic systems. We briefly analyze the Trotter-Suzuki errors and estimate the scaling properties of the algorithm for more complex applications. A quantum computer with a few tens of qubits could therefore simulate the thermodynamic properties of complex fermionic lattices inaccessible to classical supercomputers.

  16. A low-cost vector processor boosting compute-intensive image processing operations

    Science.gov (United States)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
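
    For reference, a compact sketch of the Richardson-Lucy iteration in its circular-convolution form (not the i860 implementation described above; real restorations add noise handling and stopping rules):

        import numpy as np

        def richardson_lucy(observed, psf, iterations=50):
            """psf: same shape as observed, centred at [0, 0], sums to 1."""
            H = np.fft.rfft2(psf)
            estimate = np.full(observed.shape, observed.mean())
            for _ in range(iterations):
                blurred = np.fft.irfft2(np.fft.rfft2(estimate) * H, s=observed.shape)
                ratio = observed / np.maximum(blurred, 1e-12)
                # Correlation with the PSF = convolution with its conjugate spectrum.
                estimate = estimate * np.fft.irfft2(
                    np.fft.rfft2(ratio) * np.conj(H), s=observed.shape)
            return estimate

        n = 64
        y, x = np.mgrid[:n, :n]
        g = np.exp(-0.5 * ((x - n // 2) ** 2 + (y - n // 2) ** 2) / 4.0)
        psf = np.fft.ifftshift(g / g.sum())             # centre the PSF at [0, 0]

        truth = np.zeros((n, n)); truth[20, 20] = truth[40, 45] = 1.0
        observed = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(psf), s=truth.shape)
        restored = richardson_lucy(observed, psf)
        print(np.unravel_index(restored.argmax(), restored.shape))   # near a true point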

  17. AMD's 64-bit Opteron processor

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    This talk concentrates on issues that relate to obtaining peak performance from the Opteron processor. Compiler options, memory layout, MPI issues in multi-processor configurations and the use of a NUMA kernel will be covered. A discussion of recent benchmarking projects and results will also be included. Biographies: David Rich. David directs AMD's efforts in high performance computing and also in the use of Opteron processors...

  18. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not lock-stepped but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  19. Three-dimensional magnetic field computation on a distributed memory parallel processor

    International Nuclear Information System (INIS)

    Barion, M.L.

    1990-01-01

    The analysis of three-dimensional magnetic fields by finite element methods frequently proves too onerous a task for the computing resource on which it is attempted. When non-linear and transient effects are included, it may become impossible to calculate the field distribution to sufficient resolution. One approach to this problem is to exploit the natural parallelism in the finite element method via parallel processing. This paper reports on an implementation of a finite element code for non-linear three-dimensional low-frequency magnetic field calculation on Intel's iPSC/2

  20. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  1. Experiences with the ACPMAPS (Advanced Computer Program Multiple Array Processor System) 50 GFLOP system

    International Nuclear Information System (INIS)

    Fischler, M.

    1992-10-01

    The Fermilab Computer R&D and Theory departments have for several years collaborated on a multi-GFLOP (recently upgraded to 50 GFLOP) system for lattice gauge calculations. The primary emphasis is on flexibility and ease of algorithm development. This system (ACPMAPS) has been in use for some time, allowing theorists to produce QCD results with relevance for the analysis of experimental data. We present general observations about benefits of such a scientist-oriented system, and summarize some of the advances recently made. We also discuss what was discovered about features needed in a useful algorithm exploration platform. These lessons can be applied to the design and evaluation of future massively parallel systems (commercial or otherwise).

  2. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput...

  3. Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP

    Science.gov (United States)

    Brooks, Geoffrey W.

    1996-03-01

    Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference-of-Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
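
    As a reference point (the simplest QMF pair, the orthonormal Haar filters; the paper's filters would differ), analysis splits a signal into half-rate low-pass and high-pass bands and synthesis reconstructs it exactly:

        import numpy as np

        def qmf_analysis(x):
            """Haar QMF pair: half-rate average (low-pass) and difference (high-pass)."""
            even, odd = x[0::2], x[1::2]
            return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

        def qmf_synthesis(low, high):
            even, odd = (low + high) / np.sqrt(2), (low - high) / np.sqrt(2)
            x = np.empty(low.size * 2)
            x[0::2], x[1::2] = even, odd
            return x

        x = np.random.rand(16)
        low, high = qmf_analysis(x)
        assert np.allclose(qmf_synthesis(low, high), x)   # perfect reconstruction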

  4. Experimental technique for study on three-particle reactions in kinematically total experiments with usage of the two-processor complex on the M-400 computer basis

    International Nuclear Information System (INIS)

    Berezin, F.N.; Kisurin, V.A.; Nemets, O.F.; Ofengenden, R.G.; Pugach, V.M.; Pavlenko, Yu.N.; Patlan', Yu.V.; Savrasov, S.S.

    1981-01-01

    An experimental technique for the investigation of three-particle nuclear reactions in kinematically complete experiments is described. The technique provides storage of one-dimensional and two-dimensional energy spectra from several detectors. A block diagram of the measuring system using this technique is presented. The measuring system consists of analog equipment for fast-slow coincidences and of a two-processor complex based on the M-400 computer with a common bus. The use of a two-processor complex, each computer of which has direct access to the memory of the other, makes it possible to separate the functions of data collection and operational data presentation and to perform the necessary physics calculations. The software of the measuring complex, which includes programs written in ASSEMBLER for the first computer and functional programs written in BASIC for the second computer, is considered. The software of the first computer includes the DISPETCHER dialog control program, a driver package for control of external devices, an applied program package, and system modules. The technique described was tested in an experiment investigating the d + ¹⁰B → α + α + α three-particle reaction at a deuteron energy of 13.6 MeV. A two-dimensional energy spectrum of the reaction obtained with this technique is presented.

  5. Massively Parallel Computing at Sandia and Its Application to National Defense

    National Research Council Canada - National Science Library

    Dosanjh, Sudip

    1991-01-01

    Two years ago, researchers at Sandia National Laboratories showed that a massively parallel computer with 1024 processors could solve scientific problems more than 1000 times faster than a single processor...

  6. Many-body simulations using an array processor

    International Nuclear Information System (INIS)

    Rapaport, D.C.

    1985-01-01

    Simulations of microscopic models of water and polypeptides using molecular dynamics and Monte Carlo techniques have been carried out with the aid of an FPS array processor. The computational techniques are discussed, with emphasis on the development and optimization of the software to take account of the special features of the processor. The computing requirements of these simulations exceed what could be reasonably carried out on a normal 'scientific' computer. While the FPS processor is highly suited to the kinds of models described, several other computationally intensive problems in statistical mechanics are outlined for which alternative processor architectures are more appropriate

  7. Aeroflex Single Board Computers and Instrument Circuit Cards for Nuclear Environments Measuring and Monitoring

    International Nuclear Information System (INIS)

    Stratton, Sam; Stevenson, Dave; Magnifico, Mateo

    2013-06-01

    A Single Board Computer (SBC) is an entire computer including all of the required components and I/O interfaces built on a single circuit board. SBC's are used across numerous industrial, military and space flight applications. In the case of military and space implementations, SBC's employ advanced high reliability processors designed for rugged thermal, mechanical and even radiation environments. These processors, in turn, rely on equally advanced support components such as memory, interface, and digital logic. When all of these components are put together on a printed circuit card, the result is a highly reliable Single Board Computer that can perform a wide variety of tasks in very harsh environments. In the area of instrumentation, peripheral circuit cards can be developed that directly interface to the SBC and various radiation measuring devices and systems. Designers use signal conditioning and high reliability Analog to Digital Converters (ADC's) to convert the measuring device signals to digital data suitable for a microprocessor. The data can then be sent to the SBC via high speed communication protocols such as Ethernet or a similar type of serial bus. Data received by the SBC can then be manipulated and processed into a form readily available to users. Recent events are causing some in the NPP industry to consider devices and systems with better radiation and temperature performance capability. Systems designed for space application must withstand the harsh environment of space, which under certain conditions is similar to what electronics would see during a severe nuclear reactor event. The NPP industry should be considering higher reliability electronics for certain critical applications. (authors)

  8. Multithreaded Processors

    Indian Academy of Sciences (India)

    IAS Admin

    ... by modern computer designers due to operating speed mismatch between ... a modern mobile phone is more powerful than the computer aboard NASA's Apollo 11 ... Note that such a system stalls only when all threads are stalled. Assume there are ...

  9. Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA

    Science.gov (United States)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei

    2013-03-01

    With the development of international wireless communication standards, there is an increase in the computational requirements for baseband signal processors. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, large amounts of data are processed in parallel in computation-intensive functions, which fosters the use of single instruction multiple data (SIMD) architectures in SDR platforms. So a new way must be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in the machine description language LISA. LISA is an instruction-set architecture description language that enables rapid modeling at the architectural level. To evaluate the proposed processor, three common baseband functions, FFT, FIR digital filter and matrix multiplication, have been mapped on the SDR platform. Analytical results showed that the SDR processor achieved a maximum performance boost of 47.1% relative to the comparison processor.

  10. Green Secure Processors: Towards Power-Efficient Secure Processor Design

    Science.gov (United States)

    Chhabra, Siddhartha; Solihin, Yan

    With the increasing wealth of digital information stored on computer systems today, security issues have become increasingly important. In addition to attacks targeting the software stack of a system, hardware attacks have become equally likely. Researchers have proposed Secure Processor Architectures which utilize hardware mechanisms for memory encryption and integrity verification to protect the confidentiality and integrity of data and computation, even from sophisticated hardware attacks. While there have been many works addressing performance and other system level issues in secure processor design, power issues have largely been ignored. In this paper, we first analyze the sources of power (energy) increase in different secure processor architectures. We then present a power analysis of various secure processor architectures in terms of their increase in power consumption over a base system with no protection and then provide recommendations for designs that offer the best balance between performance and power without compromising security. We extend our study to the embedded domain as well. We also outline the design of a novel hybrid cryptographic engine that can be used to minimize the power consumption for a secure processor. We believe that if secure processors are to be adopted in future systems (general purpose or embedded), it is critically important that power issues are considered in addition to performance and other system level issues. To the best of our knowledge, this is the first work to examine the power implications of providing hardware mechanisms for security.

  11. Single Event Upset Analysis: On-orbit performance of the Alpha Magnetic Spectrometer Digital Signal Processor Memory aboard the International Space Station

    Science.gov (United States)

    Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi

    2018-03-01

    Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSP), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented and the effects from the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is studied by combining data analysis and Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in DSP program memory, but the effect depends on the on-orbit particle density.

  12. An interactive parallel processor for data analysis

    International Nuclear Information System (INIS)

    Mong, J.; Logan, D.; Maples, C.; Rathbun, W.; Weaver, D.

    1984-01-01

    A parallel array of eight minicomputers has been assembled in an attempt to deal with kiloparameter data events. By exporting computer system functions to a separate processor, the authors have been able to achieve computer amplification linearly proportional to the number of executing processors

  13. Processors and systems (picture processing)

    Energy Technology Data Exchange (ETDEWEB)

    Gemmar, P

    1983-01-01

    Automatic picture processing requires high performance computers and high transmission capacities in the processor units. The author examines the possibilities of operating processors in parallel in order to accelerate the processing of pictures. He therefore discusses a number of available processors and systems for picture processing and illustrates their capacities for special types of picture processing. He stresses the fact that the amount of storage required for picture processing is exceptionally high. The author concludes that it is as yet difficult to decide whether very large groups of simple processors or highly complex multiprocessor systems will provide the best solution. Both methods will be aided by the development of VLSI. New solutions have already been offered (systolic arrays and 3-d processing structures) but they also are subject to losses caused by inherently parallel algorithms. Greater efforts must be made to produce suitable software for multiprocessor systems. Some possibilities for future picture processing systems are discussed. 33 references.

  14. Modcomp MAX IV System Processors reference guide

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, J.

    1990-10-01

    A user almost always faces a big problem when having to learn to use a new computer system. The information necessary to use the system is often scattered throughout many different manuals. The user also faces the problem of extracting the information really needed from each manual. Very few computer vendors supply a single Users Guide, or even a manual to help the new user locate the necessary manuals. Modcomp is no exception to this; Modcomp MAX IV requires that the user be familiar with the system file usage, which adds to the problem. At General Atomics there is an ever increasing need for new users to learn how to use the Modcomp computers. This paper was written to provide a condensed "Users Reference Guide" for Modcomp computer users. This manual should be of value not only to new users but to any users that are not Modcomp computer systems experts. This "Users Reference Guide" is intended to provide the basic information for the use of the various Modcomp System Processors necessary to create, compile, link-edit, and catalog a program. Only the information necessary to provide the user with a basic understanding of the System Processors is included. This document provides enough information for the majority of programmers to use the Modcomp computers without having to refer to any other manuals. A lot of emphasis has been placed on the file description and usage for each of the System Processors. This allows the user to understand how Modcomp MAX IV does things rather than just learning the system commands.

  15. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  16. Accuracies Of Optical Processors For Adaptive Optics

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1992-01-01

    Paper presents an analysis of the accuracies and accuracy requirements of optical linear-algebra processors (OLAPs) in adaptive-optics imaging systems. OLAPs are much faster than digital electronic processors and eliminate some residual distortion, but the question is whether the errors introduced by their analog processing outweigh the advantage of greater speed. The paper addresses this issue by estimating the accuracy required of a general OLAP to yield a smaller average residual wave-front aberration than a digital electronic processor computing at a given speed.

  17. Single Photon Emission Computed Tomography (SPECT)

    Science.gov (United States)

    ... examined for the location of the tracer. Computer graphics can be used to create a 3-dimensional ...

  18. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides technical description regarding the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument provides capture of video images available in CCIR format. Two memory planes each with a capacity of 512 x 512 x 8 bit data enable storage of two video image frames. The stored image can be processed on-line and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC Add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs

  19. Multiple single-board-computer system for the KEK positron generator control

    International Nuclear Information System (INIS)

    Nakahara, Kazuo; Abe, Isamu; Enomoto, Atsushi; Otake, Yuji; Urano, Takao

    1986-01-01

    The KEK positron generator is controlled by means of a distributed microprocessor network. The control system is composed of three kinds of equipment: device controllers for the linac equipment, operation management stations and a communication network. Individual linac equipment has its own microprocessor-based controller. A multiple single board computer (SBC) system is used for communication control and for equipment surveillance; it has a database containing communication and linac equipment status information. The linac operation management, which should be the "softest" (most easily changed) part of the control system, is separated from the multiple SBC system and is carried out by workstations. The principle that every processor executes only one task is maintained throughout the control system. This made the software architecture very simple. (orig.)

  20. Multibus-based parallel processor for simulation

    Science.gov (United States)

    Ogrady, E. P.; Wang, C.-H.

    1983-01-01

    A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.

  1. A Model of Batch Scheduling for a Single Batch Processor with Additional Setups to Minimize Total Inventory Holding Cost of Parts of a Single Item Requested at Multi-due-date

    Science.gov (United States)

    Hakim Halim, Abdul; Ernawati; Hidayat, Nita P. A.

    2018-03-01

    This paper deals with a model of batch scheduling for a single batch processor on which a number of parts of a single item are to be processed. The process needs two kinds of setups: main setups required before processing any batches, and additional setups required repeatedly after the batch processor completes a certain number of batches. The parts to be processed arrive at the shop floor at times coinciding with their respective starting times of processing, and the completed parts are to be delivered at multiple due dates. The objective adopted for the model is to minimize the total inventory holding cost, consisting of the holding cost per unit time for parts in completed batches and for parts in in-process batches. The formulation of the total inventory holding cost is derived from the so-called actual flow time, defined as the interval between the arrival time of a part at the production line and the delivery time of the completed part. The actual flow time captures not only minimum inventory but also just-in-time arrival and delivery. An algorithm to solve the model is proposed and a numerical example is shown.
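
    In hedged notation (the symbols below are assumed, not taken from the paper), the actual flow time of part i with arrival time a_i and delivery time d_i, and the resulting objective with a lumped holding-cost rate c, can be written as

      \[
        F_i = d_i - a_i, \qquad
        \min Z = \sum_{i=1}^{N} c\,F_i = \sum_{i=1}^{N} c\,(d_i - a_i)
      \]

    The paper splits c into separate rates for in-process and completed batches; the lumped form above only illustrates why minimizing total holding cost amounts to minimizing total actual flow time.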

  2. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some of the array processors are optimal, in the framework of linear allocation of computations, in terms of the number of processing elements and the computing time. (author)
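
    The elimination step such arrays implement is the Bareiss (fraction-free) update, in which every division is exact for integer input. A minimal single-processor C sketch (no pivoting, so nonzero leading minors are assumed; the example system is illustrative):

      /* One-pass Bareiss (fraction-free Gaussian) elimination on an
         integer augmented system [A | b]. Every division below is
         exact, so no fractions ever appear. */
      #include <stdio.h>

      #define N 3

      void bareiss(long a[N][N + 1])
      {
          long prev = 1;                  /* divisor: previous pivot */
          for (int k = 0; k < N - 1; ++k) {
              for (int i = k + 1; i < N; ++i)
                  for (int j = k + 1; j <= N; ++j)
                      a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) / prev;
              for (int i = k + 1; i < N; ++i)
                  a[i][k] = 0;            /* column eliminated */
              prev = a[k][k];
          }
      }

      int main(void)
      {
          long a[N][N + 1] = { { 2, 1, 1,  5 },
                               { 4, 3, 3, 11 },
                               { 8, 7, 9, 25 } };
          bareiss(a);                     /* upper-triangular, integer */
          for (int i = 0; i < N; ++i)
              printf("%ld %ld %ld | %ld\n", a[i][0], a[i][1], a[i][2], a[i][N]);
          return 0;
      }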

  3. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    Science.gov (United States)

    Sharma, Anuj; Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332

  4. Towards a Process Algebra for Shared Processors

    DEFF Research Database (Denmark)

    Buchholtz, Mikael; Andersen, Jacob; Løvengreen, Hans Henrik

    2002-01-01

    We present initial work on a timed process algebra that models sharing of processor resources, allowing preemption at arbitrary points in time. This enables us to model both the functional and the timing behaviour of concurrent processes executed on a single processor. We give a refinement relation...

  5. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    Explicitly Parallel Instruction Computing (EPIC) is an instruction processing paradigm that has been in the spotlight due to its adoption by the next generation of Intel processors starting with the IA-64. The EPIC processing paradigm is an evolution of the Very Long Instruction Word (VLIW) paradigm. This article gives an ...

  6. Single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Budinger, T.F.

    1986-01-01

    Single photon tomography dates from the early 1960s, when the idea of emission transverse section tomography was presented by Kuhl and Edwards. They used a rectilinear scanner and analogue back-projection methods to detect emissions from a series of sequential positions transverse to the cephalocaudal axis of the body. This chapter presents an explanation of emission tomography by describing longitudinal and transverse section tomography. In principle, all modes of tomography can be considered under the general topic of coded apertures, wherein the code ranges from translation of a pinhole collimator to rotation of a parallel hole or focused collimator array

  7. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small scale cluster. Each processor involved is a multicore processor with four cores, giving the cluster eight cores in total. The cluster runs the Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted: a communication test and a performance test. The communication test, which verifies that the computers can pass the required information without any problem, was done by running a simple MPI Hello program written in the C language. The performance test was done to show that the cluster's calculation performance is much better than that of a single-CPU computer. In this test the same code was run using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors the time required to solve the problem decreases, shortening by roughly half each time the number of processors is doubled. To conclude, we successfully developed a small scale cluster computer from common hardware, capable of higher computing power than a single-CPU computer, which can benefit research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
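
    A sketch of the kind of MPI "Hello" program used for the communication test; the file name and build line are illustrative, but the calls are standard MPI:

      /* Each of the eight processes reports its rank.
         Build and run (MPICH2), assuming the file is hello.c:
           mpicc hello.c -o hello && mpiexec -n 8 ./hello */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
          MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes (8) */
          printf("hello from rank %d of %d\n", rank, size);
          MPI_Finalize();
          return 0;
      }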

  8. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  9. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2015-01-01

    Full Text Available Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system with a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, as well as the analysis of the operating principles of the new computer system, permit us to draw conclusions regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support for operations of discrete mathematics, reflected in special facilities for processing sets and data structures. Within the framework of the project a special device was designed, a structure processor (SP), which improves performance without limiting the range of applications of such a computer system. Previous works presented the basic principles of organizing the computational process in the MISD (Multiple Instructions, Single Data) system, showed the structure and features of the structure processor, and described the general principles for solving discrete optimization problems on graphs. This paper examines two minimum-spanning-tree search algorithms, namely Kruskal's and Prim's. It studies implementations of the algorithms for two SP operation modes: coprocessor mode and MISD mode. The paper presents results of an experimental comparison of MISD system performance in coprocessor mode with mainframes.
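
    For reference, the conventional single-processor form of Kruskal's algorithm in C; in the MISD system it is precisely the set operations (find/union on components) that the structure processor offloads. The graph data are illustrative:

      /* Kruskal's minimum spanning tree with a union-find structure. */
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct { int u, v, w; } Edge;

      static int parent[16];

      static int find(int x)                     /* with path compression */
      { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

      static int cmp(const void *a, const void *b)
      { return ((const Edge *)a)->w - ((const Edge *)b)->w; }

      int main(void)
      {
          Edge e[] = { {0,1,4}, {0,2,1}, {1,2,2}, {1,3,5}, {2,3,8} };
          int n = 4, m = sizeof e / sizeof e[0], total = 0;
          for (int i = 0; i < n; ++i) parent[i] = i;
          qsort(e, m, sizeof e[0], cmp);         /* edges by weight */
          for (int i = 0; i < m; ++i) {
              int ru = find(e[i].u), rv = find(e[i].v);
              if (ru != rv) {                    /* joins two components */
                  parent[ru] = rv;
                  total += e[i].w;
                  printf("take (%d,%d) w=%d\n", e[i].u, e[i].v, e[i].w);
              }
          }
          printf("MST weight = %d\n", total);    /* 8 for this graph */
          return 0;
      }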

  10. Single photon emission computed tomography and other selected computer topics

    International Nuclear Information System (INIS)

    Frey, G.D.

    1981-01-01

    This book, the proceedings of a meeting in January 1980, contains 21 papers. Thirteen are devoted to aspects of emission tomography, four to nuclear cardiology, and five to other topics. The initial set of papers consists of reviews of the single photon emission tomography process. These include transverse axial tomography using scintillation cameras and other devices, longitudinal section tomography, and pin-hole and slant-hole systems. These reviews are generally well done but, as might be expected, lack any coherence from paper to paper. The papers on nuclear cardiology include several on Fourier analysis in nuclear cardiology and one on shunt quantification. Other clinical papers are on quantifying Tc-99m glucoheptonate uptake in the brain and on iron-59 retention studies. A general criticism of the book is the poor quality of the photographic reproductions

  11. Development of an Advanced Digital Reactor Protection System Using Diverse Dual Processors to Prevent Common-Mode Failure

    International Nuclear Information System (INIS)

    Shin, Hyun Kook; Nam, Sang Ku; Sohn, Se Do; Chang, Hoon Seon

    2003-01-01

    The advanced digital reactor protection system (ADRPS), with diverse dual processors, has been developed to prevent common-mode failure (CMF). The principle of diversity is applied to both the hardware design and the software design. For hardware diversity, two different types of CPUs are used for the bistable processor and the local coincidence logic (LCL) processor. Versa Module Eurocard-based single board computers are used for the CPU hardware platforms. The QNX operating system and the VxWorks operating system were selected for software diversity. Functional diversity is also applied to the input and output modules and to the algorithms in the bistable and LCL processors. The characteristics of the newly developed digital protection system are described together with its preventive capability against CMF, and a system reliability analysis is discussed. The evaluation results show that the ADRPS has a good preventive capability against CMF and is a highly reliable reactor protection system
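
    A sketch of the coincidence-voting idea behind an LCL processor; the two-out-of-four threshold is a typical assumption for such systems, not a figure stated in the abstract:

      /* Local coincidence logic sketch: a trip is issued when enough
         redundant bistable channels agree. The 2-out-of-4 threshold
         is an assumed, typical value. */
      #include <stdbool.h>
      #include <stdio.h>

      bool lcl_trip(const bool ch[4])
      {
          int votes = 0;
          for (int i = 0; i < 4; ++i)
              votes += ch[i];        /* count channels demanding a trip */
          return votes >= 2;         /* 2-out-of-4 coincidence */
      }

      int main(void)
      {
          bool channels[4] = { true, false, true, false };
          printf("trip = %d\n", lcl_trip(channels));   /* prints 1 */
          return 0;
      }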

  12. Single-Sex Computer Classes: An Effective Alternative.

    Science.gov (United States)

    Swain, Sandra L.; Harvey, Douglas M.

    2002-01-01

    Advocates single-sex computer instruction as a temporary alternative educational program to provide middle school and secondary school girls with access to computers, to present girls with opportunities to develop positive attitudes towards technology, and to make available a learning environment conducive to girls gaining technological skills.…

  13. A Computational Experiment on Single-Walled Carbon Nanotubes

    Science.gov (United States)

    Simpson, Scott; Lonie, David C.; Chen, Jiechen; Zurek, Eva

    2013-01-01

    A computational experiment that investigates single-walled carbon nanotubes (SWNTs) has been developed and employed in an upper-level undergraduate physical chemistry laboratory course. Computations were carried out to determine the electronic structure, radial breathing modes, and the influence of the nanotube's diameter on the…

  14. Program computes single-point failures in critical system designs

    Science.gov (United States)

    Brown, W. R.

    1967-01-01

    Computer program analyzes the designs of critical systems that will either prove the design is free of single-point failures or detect each member of the population of single-point failures inherent in a system design. This program should find application in the checkout of redundant circuits and digital systems.

  15. Single-Frame Cinema. Three Dimensional Computer-Generated Imaging.

    Science.gov (United States)

    Cheetham, Edward Joseph, II

    This master's thesis provides a description of the proposed art form called single-frame cinema, which is a category of computer imagery that takes the temporal polarities of photography and cinema and unites them into a single visual vignette of time. Following introductory comments, individual chapters discuss (1) the essential physical…

  16. Single instruction computer architecture and its application in image processing

    Science.gov (United States)

    Laplante, Phillip A.

    1992-03-01

    A single processing computer system using only half-adder circuits is described. In addition, it is shown that only a single hard-wired instruction is needed in the control unit to obtain a complete instruction set for this general purpose computer. Such a system has several advantages. First, it is intrinsically a RISC machine--in fact the 'ultimate RISC' machine. Second, because only a single type of logic element is employed, the entire computer system can be easily realized on a single, highly integrated chip. Finally, due to the homogeneous nature of the computer's logic elements, the computer has possible implementations as an optical or chemical machine. This in turn suggests possible paradigms for neural computing and artificial intelligence. After showing how we can implement a full-adder, min, max and other operations using the half-adder, we use an array of such full-adders to implement the dilation operation for two black and white images. Next we implement the erosion operation of two black and white images using a relative complement function and the properties of erosion and dilation. This approach was inspired by papers by van der Poel, in which a single instruction is used to furnish a complete set of general purpose instructions, and by Böhm-Jacopini, where it is shown that any problem can be solved using a Turing machine with one entry and one exit.
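
    The first construction mentioned above is easy to sketch in C: a half-adder computes sum = a XOR b and carry = a AND b, and two half-adders plus an OR yield a full adder:

      /* Full adder built from two half-adders, as in the abstract. */
      #include <stdio.h>

      void half_add(int a, int b, int *s, int *c) { *s = a ^ b; *c = a & b; }

      void full_add(int a, int b, int cin, int *s, int *cout)
      {
          int s1, c1, c2;
          half_add(a, b, &s1, &c1);      /* first half-adder  */
          half_add(s1, cin, s, &c2);     /* second half-adder */
          *cout = c1 | c2;               /* carry out */
      }

      int main(void)
      {
          for (int a = 0; a <= 1; ++a)           /* full truth table */
              for (int b = 0; b <= 1; ++b)
                  for (int cin = 0; cin <= 1; ++cin) {
                      int s, cout;
                      full_add(a, b, cin, &s, &cout);
                      printf("%d+%d+%d -> sum %d carry %d\n", a, b, cin, s, cout);
                  }
          return 0;
      }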

  17. Probabilistic programmable quantum processors

    International Nuclear Information System (INIS)

    Buzek, V.; Ziman, M.; Hillery, M.

    2004-01-01

    We analyze how to improve the performance of probabilistic programmable quantum processors. We show how the probability of success of the probabilistic processor can be enhanced by using the processor in loops. In addition, we show that arbitrary SU(2) transformations of qubits can be encoded in the program state of a universal programmable probabilistic quantum processor. The probability of success of this processor can be enhanced by a systematic correction of errors via conditional loops. Finally, we show that all our results can also be generalized to qudits. (Abstract Copyright [2004], Wiley Periodicals, Inc.)

  18. Numeric algorithms for parallel processors computer architectures with applications to the few-groups neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.

    1987-01-01

    A numeric algorithm and an associated computer code were developed for the rapid solution of the finite-difference representation of the few-group neutron-diffusion equations on parallel computers. Applications of the numeric algorithm on both SIMD (vector pipeline) and MIMD/SIMD (multi-CPU/vector pipeline) architectures were explored. The algorithm was successfully implemented in the two-group, 3-D neutron diffusion computer code named DIFPAR3D (DIFfusion PARallel 3-Dimension). Numerical-solution techniques used in the code include the Chebyshev polynomial acceleration technique in conjunction with the power method of outer iteration. For inner iterations, a parallel form of red-black (cyclic) line SOR, with automated determination of group-dependent relaxation factors and of the iteration numbers required to achieve a specified inner-iteration error tolerance, is incorporated. The code employs a macroscopic depletion model with trace capability for selected fission-product transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the DIFPAR3D code for realistic simulation of power reactor cores. The physics models used were proven acceptable in separate benchmarking studies
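
    The parallel-friendly part of the inner iteration is the red-black ordering: all "red" points (i+j even) update independently of one another, then all "black" points. A scalar C sketch of one point red-black SOR sweep on a 2-D 5-point stencil (a fixed relaxation factor is assumed here, whereas DIFPAR3D determines group-dependent factors automatically):

      /* One red-black SOR sweep for the 5-point Poisson stencil.
         Grid size and relaxation factor are illustrative. */
      #define N 8
      #define OMEGA 1.5   /* assumed relaxation factor */

      void rb_sor_sweep(double u[N][N], const double f[N][N], double h)
      {
          for (int color = 0; color < 2; ++color)      /* red, then black */
              for (int i = 1; i < N - 1; ++i)
                  for (int j = 1; j < N - 1; ++j)
                      if ((i + j) % 2 == color) {
                          double gs = 0.25 * (u[i-1][j] + u[i+1][j] +
                                              u[i][j-1] + u[i][j+1] -
                                              h * h * f[i][j]);
                          u[i][j] += OMEGA * (gs - u[i][j]);
                      }
      }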

  19. A study on the effectiveness of lockup-free caches for a Reduced Instruction Set Computer (RISC) processor

    OpenAIRE

    Tharpe, Leonard.

    1992-01-01

    Approved for public release; distribution is unlimited. This thesis presents a simulation and analysis of the Reduced Instruction Set Computer (RISC) architecture and the effects on RISC performance of a lockup-free cache interface. RISC architectures achieve high performance by having a small, but sufficient, instruction set with most instructions executing in one clock cycle. Current RISC performance ranges from 1.5 to 2.0 CPI. The goal of RISC is to attain a CPI of 1.0. The major hind...

  1. Satellite on-board real-time SAR processor prototype

    Science.gov (United States)

    Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François

    2017-11-01

    A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to Technology Readiness Level 4 (TRL4), breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth which cannot be compressed; specifically for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates, this is a clear disadvantage. SAR images are typically processed electronically by applying dedicated Fourier transformations. This, however, can also be performed optically and in real time; indeed, the first SAR images were processed optically. The optical Fourier processor architecture provides inherent parallel computing capabilities, allowing real-time SAR data processing and thus compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data, and both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data over a two-dimensional format, one dimension for azimuth and one for range. Since the entire signal history is displayed at once, the processor operates in parallel, yielding real-time performance without a resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype, which allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and

  2. Parallel Computation on Multicore Processors Using Explicit Form of the Finite Element Method and C++ Standard Libraries

    Directory of Open Access Journals (Sweden)

    Rek Václav

    2016-11-01

    Full Text Available In this paper, we introduce modifications of existing sequential code, written in the C or C++ programming language, for the calculation of various kinds of structures using the explicit form of the Finite Element Method (Dynamic Relaxation Method, Explicit Dynamics) in the NEXX system. The NEXX system is the core of the engineering software NEXIS, Scia Engineer, RFEM and RENEX. It supports multithreaded execution, which can now be expressed at the level of the native C++ programming language using standard libraries. Thanks to the high degree of abstraction that a contemporary C++ programming language provides, a library created in this way can be generalized for other uses of parallelism in computational mechanics.
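
    The paper works with C++ standard threading; the same fork-join element-loop pattern is sketched here in portable C with POSIX threads, with each worker updating a disjoint slice of the solution vector so no locking is needed (sizes and the per-element work are illustrative):

      /* Fork-join parallel loop over disjoint slices of a nodal vector. */
      #include <pthread.h>
      #include <stdio.h>

      #define NNODES   1000000
      #define NTHREADS 4

      static double u[NNODES];

      typedef struct { int lo, hi; } Range;

      static void *worker(void *arg)
      {
          Range *r = (Range *)arg;
          for (int i = r->lo; i < r->hi; ++i)
              u[i] += 0.01 * (i % 7);    /* stand-in for element update */
          return NULL;
      }

      int main(void)
      {
          pthread_t tid[NTHREADS];
          Range rng[NTHREADS];
          int chunk = NNODES / NTHREADS;
          for (int t = 0; t < NTHREADS; ++t) {
              rng[t].lo = t * chunk;
              rng[t].hi = (t == NTHREADS - 1) ? NNODES : (t + 1) * chunk;
              pthread_create(&tid[t], NULL, worker, &rng[t]);
          }
          for (int t = 0; t < NTHREADS; ++t)
              pthread_join(tid[t], NULL);
          printf("u[1] = %f\n", u[1]);
          return 0;
      }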

  3. Neurovision processor for designing intelligent sensors

    Science.gov (United States)

    Gupta, Madan M.; Knopf, George K.

    1992-03-01

    A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi- functional vision sensor that performs a variety of information processing operations on time- varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.

  4. Diagnosis of dementia with single photon emission computed tomography

    International Nuclear Information System (INIS)

    Jagust, W.J.; Budinger, T.F.; Reed, B.R.

    1987-01-01

    Single photon emission computed tomography is a practical modality for the study of physiologic cerebral activity in vivo. We utilized single photon emission computed tomography and N-isopropyl-p-iodoamphetamine iodine 123 to evaluate regional cerebral blood flow in nine patients with Alzheimer's disease (AD), five healthy elderly control subjects, and two patients with multi-infarct dementia. We found that all subjects with AD demonstrated flow deficits in temporoparietal cortex bilaterally, and that the ratio of activity in bilateral temporoparietal cortex to activity in the whole slice allowed the differentiation of all patients with AD from both the controls and from the patients with multi-infarct dementia. Furthermore, this ratio showed a strong correlation with disease severity in the AD group. Single photon emission computed tomography appears to be useful in the differential diagnosis of dementia and reflects clinical features of the disease

  5. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelism

  6. Computing the sparse matrix vector product using block-based kernels without zero padding on processors with AVX-512 instructions

    Directory of Open Access Journals (Sweden)

    Bérenger Bramas

    2018-04-01

    Full Text Available The sparse matrix-vector product (SpMV is a fundamental operation in many scientific applications from various fields. The High Performance Computing (HPC community has therefore continuously invested a lot of effort to provide an efficient SpMV kernel on modern CPU architectures. Although it has been shown that block-based kernels help to achieve high performance, they are difficult to use in practice because of the zero padding they require. In the current paper, we propose new kernels using the AVX-512 instruction set, which makes it possible to use a blocking scheme without any zero padding in the matrix memory storage. We describe mask-based sparse matrix formats and their corresponding SpMV kernels highly optimized in assembly language. Considering that the optimal blocking size depends on the matrix, we also provide a method to predict the best kernel to be used utilizing a simple interpolation of results from previous executions. We compare the performance of our approach to that of the Intel MKL CSR kernel and the CSR5 open-source package on a set of standard benchmark matrices. We show that we can achieve significant improvements in many cases, both for sequential and for parallel executions. Finally, we provide the corresponding code in an open source library, called SPC5.
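
    The baseline that the paper's AVX-512 kernels accelerate is the plain CSR sparse matrix-vector product, sketched here in scalar C (the mask-register assembly itself is beyond the scope of an abstract; matrix content is illustrative):

      /* Scalar CSR SpMV: y = A * x. */
      #include <stdio.h>

      void spmv_csr(int nrows, const int *rowptr, const int *col,
                    const double *val, const double *x, double *y)
      {
          for (int i = 0; i < nrows; ++i) {
              double acc = 0.0;
              for (int k = rowptr[i]; k < rowptr[i + 1]; ++k)
                  acc += val[k] * x[col[k]];   /* gather + multiply-add */
              y[i] = acc;
          }
      }

      int main(void)
      {
          /* 3x3 matrix [[4,0,1],[0,3,0],[2,0,5]] in CSR form */
          int rowptr[] = {0, 2, 3, 5};
          int col[]    = {0, 2, 1, 0, 2};
          double val[] = {4, 1, 3, 2, 5};
          double x[]   = {1, 1, 1}, y[3];
          spmv_csr(3, rowptr, col, val, x, y);
          printf("y = %.0f %.0f %.0f\n", y[0], y[1], y[2]);  /* 5 3 7 */
          return 0;
      }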

  7. A single-chip computer analysis system for liquid fluorescence

    International Nuclear Information System (INIS)

    Zhang Yongming; Wu Ruisheng; Li Bin

    1998-01-01

    The single-chip computer analysis system for liquid fluorescence is an intelligent analytic instrument based on the principle that a liquid containing hydrocarbons gives out several characteristic fluorescences when irradiated by strong light. Besides a single-chip computer, the system makes use of the keyboard and the calculation and printing functions of a CASIO printing calculator. It combines optics, mechanics and electronics into one instrument, and is small, light and practical, so it can be used for surface water sample analysis in oil fields and impurity analysis of other materials

  8. Invasive tightly coupled processor arrays

    CERN Document Server

    LARI, VAHID

    2016-01-01

    This book introduces new massively parallel computer (MPSoC) architectures called invasive tightly coupled processor arrays (TCPAs). It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees on non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture consisting of locally interconnected VLIW processing elements can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding the global memory accesses on which GPUs rely, and may even support loops with complex dependencies, such as loop-carried dependencies, that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desire...

  9. High performance graphics processors for medical imaging applications

    International Nuclear Information System (INIS)

    Goldwasser, S.M.; Reynolds, R.A.; Talton, D.A.; Walsh, E.S.

    1989-01-01

    This paper describes a family of high-performance graphics processors with special hardware for interactive visualization of 3D human anatomy. The basic architecture expands to multiple parallel processors, each processor using pipelined arithmetic and logical units for high-speed rendering of Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) data. User-selectable display alternatives include multiple 2D axial slices, reformatted images in sagittal or coronal planes and shaded 3D views. Special facilities support applications requiring color-coded display of multiple datasets (such as radiation therapy planning), or dynamic replay of time-varying volumetric data (such as cine-CT or gated MR studies of the beating heart). The current implementation is a single processor system which generates reformatted images in true real time (30 frames per second), and shaded 3D views in a few seconds per frame. It accepts full scale medical datasets in their native formats, so that minimal preprocessing delay exists between data acquisition and display

  10. Single-server blind quantum computation with quantum circuit model

    Science.gov (United States)

    Zhang, Xiaoqian; Weng, Jian; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing; Song, Tingting

    2018-06-01

    Blind quantum computation (BQC) enables a client, who has limited quantum capabilities, to delegate her quantum computation to a server, who has strong quantum computational power yet learns nothing about the client's quantum inputs, outputs and algorithms. In this article, we propose a single-server BQC protocol in the quantum circuit model, built by replacing any quantum gate with a combination of rotation operators. Trap quantum circuits are introduced, together with the combination of rotation operators, so that the server learns nothing about the quantum algorithms. The client only needs to perform operations X and Z, while the server honestly performs rotation operators.

  11. Computational Modeling of Photonic Crystal Microcavity Single-Photon Emitters

    Science.gov (United States)

    Saulnier, Nicole A.

    Conventional cryptography is based on algorithms that are mathematically complex and difficult to solve, such as factoring large numbers. The advent of a quantum computer would render these schemes useless. As scientists work to develop a quantum computer, cryptographers are developing new schemes for unconditionally secure cryptography. Quantum key distribution has emerged as one of the potential replacements of classical cryptography. It relies on the fact that measurement of a quantum bit changes the state of the bit, so undetected eavesdropping is impossible. Single polarized photons can be used as the quantum bits, such that a quantum system would in some ways mirror the classical communication scheme. The quantum key distribution system would include components that create, transmit and detect single polarized photons. The focus of this work is on the development of an efficient single-photon source. This source is comprised of a single quantum dot inside of a photonic crystal microcavity. To better understand the physics behind the device, a computational model is developed. The model uses Finite-Difference Time-Domain methods to analyze the electromagnetic field distribution in photonic crystal microcavities. It uses an 8-band k · p perturbation theory to compute the energy band structure of the epitaxially grown quantum dots. We discuss a method that combines the results of these two calculations for determining the spontaneous emission lifetime of a quantum dot in bulk material or in a microcavity. The computational models developed in this thesis are used to identify and characterize microcavities for potential use in a single-photon source. The computational tools developed are also used to investigate novel photonic crystal microcavities that incorporate 1D distributed Bragg reflectors for vertical confinement. It is found that the spontaneous emission enhancement in the quasi-3D cavities can be significantly greater than in traditional suspended slab

  12. Multithreading in vector processors

    Science.gov (United States)

    Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi

    2018-01-16

    In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.
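
    A toy simulation of the claimed mechanism, assuming nothing beyond the abstract: one program counter per thread held in a vectorized register, with the issuing thread selected round robin each cycle:

      /* Round-robin issue from a per-thread program counter file.
         "Execution" here just advances the chosen PC; sizes and
         addresses are illustrative. */
      #include <stdio.h>

      #define NTHREADS 4

      int main(void)
      {
          unsigned pc[NTHREADS] = { 0x100, 0x200, 0x300, 0x400 };
          for (int cycle = 0; cycle < 8; ++cycle) {
              int t = cycle % NTHREADS;      /* round-robin thread select */
              printf("cycle %d: issue thread %d at pc=0x%x\n", cycle, t, pc[t]);
              pc[t] += 4;                    /* advance that thread's PC */
          }
          return 0;
      }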

  13. DEBROS: design and use of a Linux-like RTOS on an inexpensive 8-bit single board computer

    International Nuclear Information System (INIS)

    Davis, M.A.

    2012-01-01

    As the power, complexity, and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive Single Board Computers (SBCs) based on 8-bit processors. When the proprietary, non-standard tools from the vendor for one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards-based alternative. Inspired by the early work done on operating systems such as UNIX (TM), Linux, and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System), which is a fully preemptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/UNIX compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using such inexpensive hardware. (author)

  14. Computing magnetic anisotropy constants of single molecule magnets

    Indian Academy of Sciences (India)

    We present here a theoretical approach to compute the molecular magnetic anisotropy parameters, D and E, for single molecule magnets in any given spin eigenstate of the exchange spin Hamiltonian. We first describe a hybrid constant-MS valence bond (VB) technique of solving spin Hamiltonians employing full spatial ...

  15. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experimental method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system, hosted on a single personal computer, using virtual machine technology. First, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Second, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each node. We thus obtain a prototype Manufacturing Grid application system running on a single personal computer, on which experiments can be carried out. Compared with known experimental methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost and simple operation, and yields trustworthy experimental results easily. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and can be migrated to the real application environment rapidly.

  16. On the implementation of the Ford | Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Full Text Available Algorithms for optimization in networks and directed graphs find broad application in solving practical tasks. However, along with the large-scale introduction of information technologies into human activity, requirements on input data volumes and solution speed grow more demanding. Although a large number of algorithms for various models of computers and computing systems have by now been studied and implemented, solving key optimization problems at real-world scale remains difficult. In this regard, the search for new and more efficient computing structures, as well as updates of known algorithms, is of great current interest. The work considers an implementation of the maximum-flow search algorithm on a directed graph for the Multiple Instruction and Single Data (MISD) computer system developed at BMSTU. The key feature of this architecture is deep hardware support for operations over sets and data structures. Storage and access functions are realized in a specialized structure processor (SP) capable of performing at the hardware level such operations as add, delete, search, intersect, complement, and merge. The advantage of such a system is the possibility of executing the parts of a computing task that access sets and data structures in parallel with arithmetic and logical processing of information. Previous works presented the general principles of organizing the computing process and the features of programs implemented in the MISD system, described the structure and principles of functioning of the structure processor, showed the general principles of solving graph tasks in such a system, and experimentally studied the efficiency of the obtained algorithms. This work gives the command formats of the SP processor, offers a technique to update the algorithms realized in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
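
    For reference, the conventional single-processor form of the algorithm (Ford-Fulkerson with breadth-first augmenting paths, i.e. Edmonds-Karp) in C; in the MISD system the set and data-structure operations are what the SP executes in hardware. The capacity matrix is illustrative:

      /* Edmonds-Karp max flow on a small capacity matrix. */
      #include <stdio.h>
      #include <string.h>

      #define N 6
      #define INF 1000000000

      int cap[N][N], parent[N];

      static int bfs(int s, int t)     /* shortest augmenting path */
      {
          int q[N], head = 0, tail = 0;
          memset(parent, -1, sizeof parent);
          parent[s] = s;
          q[tail++] = s;
          while (head < tail) {
              int u = q[head++];
              for (int v = 0; v < N; ++v)
                  if (parent[v] < 0 && cap[u][v] > 0) {
                      parent[v] = u;
                      if (v == t) return 1;
                      q[tail++] = v;
                  }
          }
          return 0;
      }

      int main(void)
      {
          cap[0][1] = 16; cap[0][2] = 13; cap[1][3] = 12; cap[2][1] = 4;
          cap[2][4] = 14; cap[3][2] = 9;  cap[3][5] = 20; cap[4][3] = 7;
          cap[4][5] = 4;
          int flow = 0;
          while (bfs(0, N - 1)) {
              int push = INF;
              for (int v = N - 1; v != 0; v = parent[v])
                  if (cap[parent[v]][v] < push) push = cap[parent[v]][v];
              for (int v = N - 1; v != 0; v = parent[v]) {
                  cap[parent[v]][v] -= push;   /* use forward capacity  */
                  cap[v][parent[v]] += push;   /* add residual capacity */
              }
              flow += push;
          }
          printf("max flow = %d\n", flow);     /* 23 for this network */
          return 0;
      }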

  17. A data base processor semantics specification package

    Science.gov (United States)

    Fishwick, P. A.

    1983-01-01

    A Semantics Specification Package (DBPSSP) for the Intel Data Base Processor (DBP) is defined. DBPSSP serves as a collection of cross assembly tools that allow the analyst to assemble request blocks on the host computer for passage to the DBP. The assembly tools discussed in this report may be effectively used in conjunction with a DBP compatible data communications protocol to form a query processor, precompiler, or file management system for the database processor. The source modules representing the components of DBPSSP are fully commented and included.

  18. Linear optical quantum computing in a single spatial mode.

    Science.gov (United States)

    Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A

    2013-10-11

    We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.

  19. Preservice Teachers' Computer Use in Single Computer Training Courses; Relationships and Predictions

    Science.gov (United States)

    Zogheib, Salah

    2015-01-01

    Single computer courses offered at colleges of education are expected to provide preservice teachers with the skills and expertise needed to adopt computer technology in their future classrooms. However, preservice teachers still find difficulty adopting such technology. This research paper investigated relationships among preservice teachers'…

  20. Distributed processor systems

    International Nuclear Information System (INIS)

    Zacharov, B.

    1976-01-01

    In recent years, there has been a growing tendency in high-energy physics and in other fields to solve computational problems by distributing tasks among the resources of inter-coupled processing devices and associated system elements. This trend has gained further momentum more recently with the increased availability of low-cost processors and with the development of the means of data distribution. In two lectures, the broad question of distributed computing systems is examined and the historical development of such systems reviewed. An attempt is made to examine the reasons for the existence of these systems and to discern the main trends for the future. The components of distributed systems are discussed in some detail and particular emphasis is placed on the importance of standards and conventions in certain key system components. The ideas and principles of distributed systems are discussed in general terms, but these are illustrated by a number of concrete examples drawn from the context of the high-energy physics environment. (Auth.)

  1. Performing stencil computations

    Energy Technology Data Exchange (ETDEWEB)

    Donofrio, David

    2018-01-16

    A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
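
    A 3-point stencil pass of the kind such an instruction targets: every output point reads its input neighborhood at fixed offsets, which is why folding the offset-to-address lookup into a single instruction pays off (coefficients are illustrative):

      /* Simple 1-D 3-point stencil: out[i] depends on in[i-1..i+1]. */
      #include <stdio.h>

      #define N 10

      int main(void)
      {
          double in[N], out[N] = {0};
          for (int i = 0; i < N; ++i) in[i] = i * i;
          for (int i = 1; i < N - 1; ++i)          /* interior points only */
              out[i] = 0.25 * in[i - 1] + 0.5 * in[i] + 0.25 * in[i + 1];
          for (int i = 1; i < N - 1; ++i) printf("%g ", out[i]);
          printf("\n");
          return 0;
      }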

  2. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Directory of Open Access Journals (Sweden)

    André David Kovac

    Full Text Available Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/pray population interactions. The evidence is mounting, not only to the presence of delays as physical constraints in signal propagation speed, but also to their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version to the single node Delay-Coupled Reservoir, that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.

  3. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Science.gov (United States)

    Kovac, André David; Koall, Maximilian; Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances to predator/prey population interactions. Evidence is mounting not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that contain them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space, as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite for the functioning of any reservoir computing device. However, fading memory is also a limitation for any computation that requires persistent storage. To overcome this limitation, the current work introduces an extended version of the single-node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single-node Delay-Coupled Reservoir extends the class of solvable tasks to those that require non-fading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically inspired ultrafast computing device with extended functionality.

  4. VIRTUS: a multi-processor system in FASTBUS

    International Nuclear Information System (INIS)

    Ellett, J.; Jackson, R.; Ritter, R.; Schlein, P.; Yaeger, D.; Zweizig, J.

    1986-01-01

    VIRTUS is a system of parallel MC68000-based processors interconnected by FASTBUS that is used either on-line as an intelligent trigger component or off-line for full event processing. Each processor receives the complete set of data from one event. The host computer, a VAX 11/780, down-line loads all software to the processors, controls and monitors the functioning of all processors, and writes processed data to tape. Instructions, programs, and data are transferred among the processors and the host in the form of fixed format, variable length data blocks. (Auth.)

  5. Discussion paper for a highly parallel array processor-based machine

    International Nuclear Information System (INIS)

    Hagstrom, R.; Bolotin, G.; Dawson, J.

    1984-01-01

    The architectural plan for a quickly realizable implementation of a highly parallel special-purpose computer system with peak performance in the range of 6 billion floating point operations per second is discussed. The architecture is suited to lattice gauge theory computations of fundamental physics interest and may be applicable to a range of other numerically intensive problems. The plan is quickly realizable because it employs a maximum of commercially available hardware subsystems and because the architecture is software-transparent to the individual processors, allowing straightforward re-use of whatever commercially available operating systems and support software are suitable to run on the commercially produced processors. A tiny prototype instrument, designed along this architecture, has already been operated. A few elementary examples of programs which can run efficiently are presented. The large machine which the authors propose to build would be based upon a highly competent array processor, the ST-100 Array Processor, and specific design possibilities are discussed. The first step toward realizing this plan practically is to install a single ST-100 to allow algorithm development to proceed while a demonstration unit is built using two of the ST-100 Array Processors

  6. Color sensor and neural processor on one chip

    Science.gov (United States)

    Fiesler, Emile; Campbell, Shannon R.; Kempem, Lother; Duong, Tuan A.

    1998-10-01

    A low-cost, compact, and robust color sensor that can operate in real time under various environmental conditions can benefit many applications, including quality control, chemical sensing, food production, medical diagnostics, energy conservation, monitoring of hazardous waste, and recycling. Unfortunately, existing color sensors are either bulky and expensive or do not provide the required speed and accuracy. In this publication we describe the design of an accurate real-time color classification sensor, together with preprocessing and a subsequent neural network processor integrated on a single complementary metal oxide semiconductor (CMOS) integrated circuit. This one-chip sensor and information processor will be low in cost, robust, and mass-producible using standard commercial CMOS processes. The performance of the chip and the feasibility of its manufacture are demonstrated through computer simulations based on CMOS hardware parameters. Comparisons with competing methodologies show a significantly higher performance for our device.

  7. Single photon emission computed tomography in children with idiopathic seizures

    International Nuclear Information System (INIS)

    Hara, Masafumi; Takahashi, Mutsumasa; Kojima, Akihiro; Shimomura, Osamu; Kinoshita, Rumi; Tomiguchi, Seiji; Taku, Keiichi; Miike, Teruhisa

    1991-01-01

    Single photon emission computed tomography (SPECT) with N-isopropyl-p-[123I]-iodoamphetamine (IMP), X-ray computed tomography (X-CT), and magnetic resonance imaging (MRI) were performed in 20 children with idiopathic seizures. In children with idiopathic seizures, SPECT could detect the abnormal sites at the highest rate (45%) compared with CT (10%) and MRI (12%), but the abnormal sites on SPECT correlated poorly with the foci on the electroencephalogram (EEG). Idiopathic epilepsy with hypoperfusion on SPECT was refractory to treatment and was frequently associated with mental and/or developmental retardation. Perfusion defects on SPECT scans probably affect the development and maturation of the brain in children. (author)

  8. Integrated fuel processor development

    International Nuclear Information System (INIS)

    Ahmed, S.; Pereira, C.; Lee, S. H. D.; Krumpelt, M.

    2001-01-01

    The Department of Energy's Office of Advanced Automotive Technologies has been supporting the development of fuel-flexible fuel processors at Argonne National Laboratory. These fuel processors will enable fuel cell vehicles to operate on fuels available through the existing infrastructure. The constraints of on-board space and weight require that these fuel processors be designed to be compact and lightweight, while meeting the performance targets for efficiency and gas quality needed for the fuel cell. This paper discusses the performance of a prototype fuel processor that has been designed and fabricated to operate with liquid fuels, such as gasoline, ethanol, methanol, etc. Rated for a capacity of 10 kWe (one-fifth of that needed for a car), the prototype fuel processor integrates the unit operations (vaporization, heat exchange, etc.) and processes (reforming, water-gas shift, preferential oxidation reactions, etc.) necessary to produce the hydrogen-rich gas (reformate) that will fuel the polymer electrolyte fuel cell stacks. The fuel processor work is being complemented by analytical and fundamental research. With the ultimate objective of meeting on-board fuel processor goals, these studies include: modeling fuel cell systems to identify design and operating features; evaluating alternative fuel processing options; and developing appropriate catalysts and materials. Issues and outstanding challenges that need to be overcome in order to develop practical, on-board devices are discussed

  9. 3081/E processor and its on-line use

    International Nuclear Information System (INIS)

    Rankin, P.; Bricaud, B.; Gravina, M.

    1985-05-01

    The 3081/E is a second generation emulator of an IBM mainframe. One of its applications will be to form part of the data acquisition system of the upgraded Mark II detector for data taking at the SLAC linear collider. Since the processor does not have direct connections to I/O devices, a FASTBUS interface will be provided to allow communication with both SLAC Scanner Processors (which are responsible for the accumulation of data at a crate level) and the experiment's VAX 8600 mainframe. The 3081/E's will supply a significant amount of on-line computing power to the experiment (a single 3081/E is equivalent to 4 to 5 VAX 11/780s). A major advantage of the 3081/E is that program development can be done on an IBM mainframe (such as the one used for off-line analysis), which gives the programmer access to a full range of debugging tools. The processor's performance can be continually monitored by comparison of the results obtained using it to those given when the same program is run on an IBM computer. 9 refs

  10. Special processor for in-core control systems

    International Nuclear Information System (INIS)

    Golovanov, M.N.; Duma, V.R.; Levin, G.L.; Mel'nikov, A.V.; Polikanin, A.V.; Filatov, V.P.

    1978-01-01

    The BUTs-20 special processor is discussed, designed to control the units of the in-core control equipment which are incorporated into the VECTOR communication channel, and to provide preliminary data processing prior to computer calculations. The processor's instruction set and flowchart, and the organization of its communication with memories and other units of the system, are given. The processor components, a control unit and an arithmetic logical unit, are discussed. It is noted that the special processor permits more effective utilization of computer time

  11. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    OpenAIRE

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purpose to the object tracking tasks using only limited amount of tra...

  12. Logistic Fuel Processor Development

    National Research Council Canada - National Science Library

    Salavani, Reza

    2004-01-01

    ... to light gases, then steam reform the light gases into a hydrogen-rich stream. This report documents the efforts in developing a fuel processor capable of providing hydrogen to a 3kW fuel cell stack...

  13. Logistic Fuel Processor Development

    National Research Council Canada - National Science Library

    Salavani, Reza

    2004-01-01

    The Air Base Technologies Division of the Air Force Research Laboratory has developed a logistic fuel processor that removes the sulfur content of the fuel and in the process converts logistic fuel...

  14. Adaptive signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μs. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally, some system configurations and applications for this adaptive signal processor are discussed.
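    For readers unfamiliar with the clipped variant, the sketch below implements a sign-sign LMS update in the spirit of the quantized Widrow-Hoff algorithm described above. It is illustrative only: the 64 weight channels match the system, but the step size, input statistics, and floating-point arithmetic are our assumptions, not properties of the hardware.

    import numpy as np

    rng = np.random.default_rng(1)
    n_weights, mu = 64, 1e-3
    w = np.zeros(n_weights)
    w_true = rng.normal(size=n_weights)        # unknown reference system

    for _ in range(20000):
        x = rng.normal(size=n_weights)         # input sample across channels
        d = w_true @ x                         # desired (reference) output
        e = d - w @ x                          # error signal
        w += mu * np.sign(e) * np.sign(x)      # clipped update: signs only

    print("residual weight error:", np.linalg.norm(w - w_true))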

  15. Adaptive signal processor

    International Nuclear Information System (INIS)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally, some system configurations and applications for this adaptive signal processor are discussed

  16. An overview of flight computer technologies for future NASA

    Science.gov (United States)

    Alkalai, L.

    2001-01-01

    In this paper, we present an overview of current developments by several US Government agencies and associated programs towards high-performance single-board computers for use in space. Three separate projects will be described: two that are based on the PowerPC processor, and one based on the Pentium processor.

  17. Simulation of a parallel processor on a serial processor: The neutron diffusion equation

    International Nuclear Information System (INIS)

    Honeck, H.C.

    1981-01-01

    Parallel processors could provide the nuclear industry with very high computing power at a very moderate cost. Will we be able to make effective use of this power? This paper explores the use of a very simple parallel processor for solving the neutron diffusion equation to predict power distributions in a nuclear reactor. We first describe a simple parallel processor and estimate its theoretical performance based on current hardware technology. Next, we show how the parallel processor could be used to solve the neutron diffusion equation. We then present the results of some simulations of a parallel processor run on a serial processor and measure some of the expected inefficiencies. Finally, we extrapolate the results to estimate how actual design codes would perform. We find that the standard numerical methods for solving the neutron diffusion equation are still applicable when used on a parallel processor. However, some simple modifications to these methods will be necessary if we are to achieve the full power of these new computers. (orig.) [de
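    The parallelism being exploited is the point-wise independence of each sweep: in a Jacobi-style iteration, every grid point is updated from its neighbours' old values, so all updates in one sweep could run on separate processors. The serial sketch below illustrates that update pattern on a model diffusion (Poisson-type) problem; the grid size, source term, and tolerance are our own choices, not the paper's.

    import numpy as np

    n = 32
    phi = np.zeros((n, n))          # initial flux guess, boundary held at zero
    S = np.ones((n, n))             # uniform source term (arbitrary units)
    h2 = (1.0 / (n - 1)) ** 2       # grid spacing squared

    for it in range(5000):
        new = phi.copy()
        # Each interior point depends only on its neighbours' *old* values,
        # so every update in this sweep is independent (i.e. parallelizable).
        new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:] +
                                  h2 * S[1:-1, 1:-1])
        if np.max(np.abs(new - phi)) < 1e-8:
            break
        phi = new

    print(f"stopped after {it} sweeps, peak value {phi.max():.4f}")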

  18. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Science.gov (United States)

    2010-04-01

    21 Food and Drugs (revised as of 2010-04-01): Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  19. Assembly processor program converts symbolic programming language to machine language

    Science.gov (United States)

    Pelto, E. V.

    1967-01-01

    Assembly processor program converts symbolic programming language to machine language. This program translates symbolic codes into computer-understandable instructions, assigns locations in storage for successive instructions, and computes locations from symbolic addresses.

  20. Functional unit for a processor

    NARCIS (Netherlands)

    Rohani, A.; Kerkhoff, Hans G.

    2013-01-01

    The invention relates to a functional unit for a processor, such as a Very Large Instruction Word Processor. The invention further relates to a processor comprising at least one such functional unit. The invention further relates to a functional unit and processor capable of mitigating the effect of

  1. Implementation of an EPICS IOC on an Embedded Soft Core Processor Using Field Programmable Gate Arrays

    International Nuclear Information System (INIS)

    Douglas Curry; Alicia Hofler; Hai Dong; Trent Allison; J. Hovater; Kelly Mahoney

    2005-01-01

    At Jefferson Lab, we have been evaluating soft core processors running an EPICS IOC over μClinux on our custom hardware. A soft core processor is a flexible CPU architecture that is configured in the FPGA as opposed to a hard core processor which is fixed in silicon. Combined with an on-board Ethernet port, the technology incorporates the IOC and digital control hardware within a single FPGA. By eliminating the general purpose computer IOC, the designer is no longer tied to a specific platform, e.g. PC, VME, or VXI, to serve as the intermediary between the high level controls and the field hardware. This paper will discuss the design and development process as well as specific applications for JLab's next generation low-level RF controls and Machine Protection Systems

  2. Quantification in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Buvat, Irene

    2005-01-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: Quantification in emission tomography - definition and challenges; quantification biasing phenomena; 2 - quantification in SPECT, problems and correction methods: Attenuation, scattering, un-stationary spatial resolution, partial volume effect, movement, tomographic reconstruction, calibration; 3 - Synthesis: actual quantification accuracy; 4 - Beyond the activity concentration measurement

  3. OSL sensitivity changes during single aliquot procedures: Computer simulations

    DEFF Research Database (Denmark)

    McKeever, S.W.S.; Agersnap Larsen, N.; Bøtter-Jensen, L.

    1997-01-01

    We present computer simulations of sensitivity changes obtained during single aliquot, regeneration procedures. The simulations indicate that the sensitivity changes are the combined result of shallow trap and deep trap effects. Four separate processes have been identified. Although procedures can be suggested to eliminate the shallow trap effects, it appears that the deep trap effects cannot be removed. The character of the sensitivity changes which result from these effects is seen to be dependent upon several external parameters, including the extent of bleaching of the OSL signal, the laboratory...

  4. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1989-10-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  5. Computing tools for implementing standards for single-case designs.

    Science.gov (United States)

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  6. Single cell adhesion assay using computer controlled micropipette.

    Directory of Open Access Journals (Sweden)

    Rita Salánki

    Full Text Available Cell adhesion is a fundamental phenomenon vital for all multicellular organisms. Recognition of and adhesion to specific macromolecules is a crucial task of leukocytes to initiate the immune response. To gain statistically reliable information on cell adhesion, large numbers of cells should be measured. However, direct measurement of the adhesion force of single cells is still challenging, and today's techniques typically have an extremely low throughput (5-10 cells per day). Here, we introduce a computer-controlled micropipette mounted onto a normal inverted microscope for probing single cell interactions with specific macromolecules. We calculated the estimated hydrodynamic lifting force acting on target cells by numerical simulation of the flow at the micropipette tip. The adhesion force of surface-attached cells could be accurately probed by repeating the pick-up process with increasing vacuum applied in the pipette positioned above the cell under investigation. Using the introduced methodology, hundreds of cells adhered to specific macromolecules were measured one by one in a relatively short period of time (∼30 min). We blocked nonspecific cell adhesion by the protein non-adhesive PLL-g-PEG polymer. We found that human primary monocytes are less adherent to fibrinogen than their in vitro differentiated descendants: macrophages and dendritic cells, the latter producing the highest average adhesion force. Validation of the here introduced method was achieved by the hydrostatic step-pressure micropipette manipulation technique. Additionally, the result was reinforced in standard microfluidic shear stress channels. Nevertheless, the automated micropipette gave higher sensitivity and fewer side-effects than the shear stress channel. Using our technique, the probed single cells can be easily picked up and further investigated by other techniques; a definite advantage of the computer-controlled micropipette. Our experiments revealed the existence of a
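    The probing protocol lends itself to a short sketch: repeat the pick-up attempt with increasing vacuum and record the suction at which each cell detaches. Everything below (the pressure steps, the toy per-cell adhesion model, the units) is invented for illustration and carries no values from the study.

    import numpy as np

    rng = np.random.default_rng(2)
    adhesion_forces = rng.gamma(4.0, 2.0, size=200)   # toy per-cell forces, nN

    def probe_cell(f_adhesion, steps_nN=np.arange(1.0, 41.0, 1.0)):
        # Repeat the pick-up with increasing vacuum; the first lifting force
        # exceeding the cell's adhesion force detaches it.
        for f_lift in steps_nN:
            if f_lift >= f_adhesion:
                return f_lift            # recorded detachment force
        return np.nan                    # cell never detached in this range

    measured = np.array([probe_cell(f) for f in adhesion_forces])
    print("median detachment force: %.1f nN" % np.nanmedian(measured))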

  7. 3081/E processor

    International Nuclear Information System (INIS)

    Kunz, P.F.; Gravina, M.; Oxoby, G.; Trang, Q.; Fucci, A.; Jacobs, D.; Martin, B.; Storr, K.

    1983-03-01

    Since the introduction of the 168/E, emulating processors have been successful over an amazingly wide range of applications. This paper will describe a second generation processor, the 3081/E. This new processor, which is being developed as a collaboration between SLAC and CERN, goes beyond just fixing the obvious faults of the 168/E. Not only will the 3081/E have much more memory space, incorporate many more IBM instructions, and have full double precision floating point arithmetic, but it will also have faster execution times and be much simpler to build, debug, and maintain. The simple interface and reasonable cost of the 168/E will be maintained for the 3081/E

  8. Multiple single-element transducer photoacoustic computed tomography system

    Science.gov (United States)

    Kalva, Sandeep Kumar; Hui, Zhe Zhi; Pramanik, Manojit

    2018-02-01

    Light absorption by the chromophores (hemoglobin, melanin, water, etc.) present in any biological tissue results in a local temperature rise. This rise in temperature results in the generation of pressure waves due to the thermoelastic expansion of the tissue. In a circular scanning photoacoustic computed tomography (PACT) system, these pressure waves can be detected using a single-element ultrasound transducer (SUST) (while rotating in full 360° around the sample) or using a circular array transducer. A SUST takes several minutes to acquire the PA data around the sample, whereas a circular array transducer takes only a fraction of a second. Hence, for real-time imaging, circular array transducers are preferred. However, these circular array transducers are custom-made, expensive, and not easily available in the market, whereas SUSTs are cheap and readily available; using SUSTs for PACT systems is still cost-effective. To reduce the scanning time to a few seconds, instead of using a single SUST (rotating 360°), multiple SUSTs can be used at the same time to acquire the PA data, as the sketch below illustrates. This reduces the scanning time twofold with two SUSTs (each rotating 180°), fourfold with four SUSTs (each rotating 90°), and eightfold with eight SUSTs (each rotating 45°). Here we show that with multiple SUSTs, similar PA images (numerical and experimental phantom data) can be obtained as with a single SUST.
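    A back-of-the-envelope check of the stated reductions, with illustrative numbers only (the angular step and per-step acquisition time below are assumptions, not values from the paper):

    full_scan_deg = 360
    step_deg = 1.0            # angular step between acquisitions (assumed)
    time_per_step_s = 0.5     # acquisition time per step (assumed)

    for k in (1, 2, 4, 8):
        # k evenly spaced transducers each cover only 360/k degrees.
        steps = (full_scan_deg / k) / step_deg
        print(f"{k} SUST(s): {steps * time_per_step_s:6.0f} s  "
              f"({k}-fold reduction vs. a single SUST)")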

  9. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    Science.gov (United States)

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purpose to the object tracking tasks using only limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  10. Sojourn time tails in processor-sharing systems

    NARCIS (Netherlands)

    Egorova, R.R.

    2009-01-01

    The processor-sharing discipline was originally introduced as a modeling abstraction for the design and performance analysis of the processing unit of a computer system. Under the processor-sharing discipline, all active tasks are assumed to be processed simultaneously, receiving an equal share of

  11. ACP/R3000 processors in data acquisition systems

    International Nuclear Information System (INIS)

    Deppe, J.; Areti, H.; Atac, R.

    1989-02-01

    We describe ACP/R3000 processor based data acquisition systems for high energy physics. This VME bus compatible processor board, with a computational power equivalent to 15 VAX 11/780s or better, contains 8 Mb of memory for event buffering and has a high speed secondary bus that allows data gathering from front end electronics. 2 refs., 3 figs

  12. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data (SPMD) model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration, we found that execution time was reduced by a factor of 1.6 when four virtual processors were used. To achieve even lower execution times with an increasing number of processors, the time allocated to data transfer, data reads, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reducing execution times in temporal fringe pattern analysis
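    As a flavour of the single-program multiple-data idea, the sketch below runs one per-pixel temporal routine on row blocks of a synthetic fringe stack across four workers. It is our illustration, not the authors' code; the frame dimensions and the FFT-based per-pixel analysis are stand-ins.

    import numpy as np
    from multiprocessing import Pool

    rng = np.random.default_rng(0)
    frames = rng.random((256, 128, 128))     # synthetic (time, y, x) stack

    def analyse_rows(rows):
        # Same program on different data: each worker gets its own rows.
        block = frames[:, rows, :]
        spectrum = np.abs(np.fft.rfft(block, axis=0))
        return rows, spectrum.argmax(axis=0)  # dominant temporal frequency bin

    if __name__ == "__main__":
        chunks = np.array_split(np.arange(frames.shape[1]), 4)
        with Pool(4) as pool:
            results = pool.map(analyse_rows, chunks)
        print("processed", sum(len(r) for r, _ in results), "rows in 4 workers")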

  13. On the effective parallel programming of multi-core processors

    NARCIS (Netherlands)

    Varbanescu, A.L.

    2010-01-01

    Multi-core processors are considered now the only feasible alternative to the large single-core processors which have become limited by technological aspects such as power consumption and heat dissipation. However, due to their inherent parallel structure and their diversity, multi-cores are

  14. Application of an array processor to the analysis of magnetic data for the Doublet III tokamak

    International Nuclear Information System (INIS)

    Wang, T.S.; Saito, M.T.

    1980-08-01

    Discussed herein is a fast computational technique employing the Floating Point Systems AP-190L array processor to analyze magnetic data for the Doublet III tokamak, a fusion research device. Interpretation of the experimental data requires the repeated solution of a free-boundary nonlinear partial differential equation, which describes the magnetohydrodynamic (MHD) equilibrium of the plasma. For this particular application, we have found that the array processor is only 1.4 and 3.5 times slower than the CDC-7600 and CRAY computers, respectively. The overhead on the host DEC-10 computer was kept to a minimum by chaining the complete Poisson solver and free-boundary algorithm into one single-load module using the vector function chainer (VFC). A simple time-sharing scheme for using the MHD code is also discussed

  15. A high-speed analog neural processor

    NARCIS (Netherlands)

    Masa, P.; Masa, Peter; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    Targeted at high-energy physics research applications, our special-purpose analog neural processor can classify up to 70 dimensional vectors within 50 nanoseconds. The decision-making process of the implemented feedforward neural network enables this type of computation to tolerate weight

  16. An embedded single-board computer for BPM of SSRF

    International Nuclear Information System (INIS)

    Chen Kai; Liu Shubin; Yan Han; Wu Weihao; Zhao Lei; An Qi; Leng Yongbin; Yi Xing; Yan Yingbing; Lai Longwei

    2011-01-01

    An embedded single-board computer (SBC) system based on the AT91RM9200 was designed for monitoring and controlling the digital beam position monitor system of the Shanghai Synchrotron Radiation Facility (SSRF) through the Virtex-4 FPGA in the digital processing board. The SBC transfers configuration commands from the remote EPICS to the FPGA and calculates the beam position data. The interface between the FPGA and the SBC is the Static Memory Controller (SMC), with a peak transfer speed of up to 349 Mbps. The 100 Mb Ethernet is used for data transfer between the EPICS and the SBC board, and a serial port serves to monitor the status of the embedded system. Test results indicate that the SBC board functions well. (authors)

  17. Single-molecule protein sequencing through fingerprinting: computational assessment

    Science.gov (United States)

    Yao, Yao; Docter, Margreet; van Ginkel, Jetty; de Ridder, Dick; Joo, Chirlmin

    2015-10-01

    Proteins are vital in all biological systems as they constitute the main structural and functional components of cells. Recent advances in mass spectrometry have brought the promise of complete proteomics by helping draft the human proteome. Yet, this commonly used protein sequencing technique has fundamental limitations in sensitivity. Here we propose a method for single-molecule (SM) protein sequencing. A major challenge lies in the fact that proteins are composed of 20 different amino acids, which demands 20 molecular reporters. We computationally demonstrate that it suffices to measure only two types of amino acids to identify proteins and suggest an experimental scheme using SM fluorescence. When achieved, this highly sensitive approach will result in a paradigm shift in proteomics, with major impact in the biological and medical sciences.
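    A toy version of the two-residue fingerprinting idea makes the claim tangible: reduce each protein to the ordered pattern of just two amino acid types and check whether the patterns remain unique. The mini-proteome and the choice of cysteine (C) and lysine (K) below are invented for illustration, not taken from the paper.

    proteome = {
        "protA": "MKCLLVKAAC",
        "protB": "MACLLKKAVC",
        "protC": "MKLLVAAQWE",
    }

    def fingerprint(seq, letters=("C", "K")):
        # Keep only the two reporter residue types, in order of appearance.
        return "".join(ch for ch in seq if ch in letters)

    prints = {name: fingerprint(seq) for name, seq in proteome.items()}
    print(prints)
    unique = len(set(prints.values())) == len(prints)
    print("fingerprints uniquely identify proteins:", unique)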

  18. Single-molecule protein sequencing through fingerprinting: computational assessment

    International Nuclear Information System (INIS)

    Yao, Yao; Docter, Margreet; Van Ginkel, Jetty; Joo, Chirlmin; De Ridder, Dick

    2015-01-01

    Proteins are vital in all biological systems as they constitute the main structural and functional components of cells. Recent advances in mass spectrometry have brought the promise of complete proteomics by helping draft the human proteome. Yet, this commonly used protein sequencing technique has fundamental limitations in sensitivity. Here we propose a method for single-molecule (SM) protein sequencing. A major challenge lies in the fact that proteins are composed of 20 different amino acids, which demands 20 molecular reporters. We computationally demonstrate that it suffices to measure only two types of amino acids to identify proteins and suggest an experimental scheme using SM fluorescence. When achieved, this highly sensitive approach will result in a paradigm shift in proteomics, with major impact in the biological and medical sciences. (paper)

  19. Single photon emission computed tomography of the liver

    International Nuclear Information System (INIS)

    Makler, P.T. Jr.

    1988-01-01

    Single photon emission computed tomography (SPECT) of the liver has been introduced in order to improve sensitivity and localization of space-occupying lesions. There have been numerous reports on the usefulness of the technique, as well as extensive analyses of its technical drawbacks. In general, SPECT provides a more accurate estimation of defect size than does conventional planar scintigraphy for cases in which one wishes to evaluate changes in lesion size due to therapy. The presence of a superimposed parenchymal disease, however, remains a major problem, which will only be resolved by a scanning technique that specifically detects the disease process of concern (hot spot imaging) rather than displacement of normal tissue (cold spot imaging)

  20. Single photon emission computed tomography in AIDS dementia complex

    International Nuclear Information System (INIS)

    Pohl, P.; Vogl, G.; Fill, H.; Roessler, H.Z.; Zangerle, R.; Gerstenbrand, F.

    1988-01-01

    Single photon emission computed tomography (SPECT) studies were performed in AIDS dementia complex using IMP in 12 patients (and HM-PAO in four of these same patients). In all patients, SPECT revealed either multiple or focal uptake defects, the latter corresponding with focal signs or symptoms in all but one case. Computerized tomography showed a diffuse cerebral atrophy in eight of 12 patients, magnetic resonance imaging exhibited changes like atrophy and/or leukoencephalopathy in two of five cases. Our data indicate that both disturbance of cerebral amine metabolism and alteration of local perfusion share in the pathogenesis of AIDS dementia complex. SPECT is an important aid in the diagnosis of AIDS dementia complex and contributes to the understanding of the pathophysiological mechanisms of this disorder

  1. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently, almost all algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  2. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  3. Spectrum of single photon emission computed tomography/computed tomography findings in patients with parathyroid adenomas.

    Science.gov (United States)

    Chakraborty, Dhritiman; Mittal, Bhagwant Rai; Harisankar, Chidambaram Natrajan Balasubramanian; Bhattacharya, Anish; Bhadada, Sanjay

    2011-01-01

    Primary hyperparathyroidism results from excessive parathyroid hormone secretion. Approximately 85% of all cases of primary hyperparathyroidism are caused by a single parathyroid adenoma; 10-15% of the cases are caused by parathyroid hyperplasia. Parathyroid carcinoma accounts for approximately 3-4% of cases of primary disease. Technetium-99m-sestamibi (MIBI), the current scintigraphic procedure of choice for preoperative parathyroid localization, can be performed in various ways. The "single-isotope, double-phase technique" is based on the fact that MIBI washes out more rapidly from the thyroid than from abnormal parathyroid tissue. However, not all parathyroid lesions retain MIBI and not all thyroid tissue washes out quickly, so subtraction imaging is helpful. Single photon emission computed tomography (SPECT) provides information for localizing parathyroid lesions, differentiating thyroid from parathyroid lesions, and detecting and localizing ectopic parathyroid lesions. Adding CT to SPECT improves sensitivity. This pictorial essay demonstrates various SPECT/CT patterns observed in parathyroid scintigraphy.

  4. Interleaved Subtask Scheduling on Multi Processor SOC

    NARCIS (Netherlands)

    Zhe, M.

    2006-01-01

    The ever-progressing semiconductor processing technique has integrated more and more embedded processors on a single system-on-a-chip (SoC). With such powerful SoC platforms, and also due to the stringent time-to-market deadlines, many functionalities which used to be implemented in ASICs are

  5. Computational/experimental studies of isolated, single component droplet combustion

    Science.gov (United States)

    Dryer, Frederick L.

    1993-01-01

    Isolated droplet combustion processes have been the subject of extensive experimental and theoretical investigations for nearly 40 years. The gross features of droplet burning are qualitatively embodied by simple theories and are relatively well understood. However, there remain significant aspects of droplet burning, particularly its dynamics, for which additional basic knowledge is needed for thorough interpretations and quantitative explanations of transient phenomena. Spherically-symmetric droplet combustion, which can only be approximated under conditions of both low Reynolds and Grashof numbers, represents the simplest geometrical configuration in which to study the coupled chemical/transport processes inherent within non-premixed flames. The research summarized here concerns recent results on isolated, single component droplet combustion under microgravity conditions, a program pursued jointly with F.A. Williams of the University of California, San Diego. The overall program involves developing and applying experimental methods to study the burning of isolated, single component droplets, in various atmospheres, primarily at atmospheric pressure and below, in both drop towers and aboard space-based platforms such as the Space Shuttle or Space Station. Both computational methods and asymptotic methods, the latter pursued mainly at UCSD, are used in developing the experimental test matrix, in analyzing results, and for extending theoretical understanding. Methanol and the normal alkanes n-heptane and n-decane have been selected as test fuels to study time-dependent droplet burning phenomena. The following sections summarize the Princeton efforts on this program, describe work in progress, and briefly delineate future research directions.

  6. Proceedings of clinical SPECT [single photon emission computed tomography] symposium

    International Nuclear Information System (INIS)

    1986-09-01

    It has been five years since the last in-depth American College of Nuclear Physicians/Society of Nuclear Medicine Symposium on the subject of single photon emission computed tomography (SPECT) was held. Because this subject was nominated as the single most desired topic we have selected SPECT imaging as the basis for this year's program. The objectives of this symposium are to survey the progress of SPECT clinical applications that have taken place over the last five years and to provide practical and timely guidelines to users of SPECT so that this exciting imaging modality can be fully integrated into the evaluation of pathologic processes. The first half was devoted to a consideration of technical factors important in SPECT acquisition and the second half was devoted to those organ systems about which sufficient clinical SPECT imaging data are available. With respect to the technical aspect of the program we have selected the key areas which demand awareness and attention in order to make SPECT operational in clinical practice. These include selection of equipment, details of uniformity correction, utilization of phantoms for equipment acceptance and quality assurance, the major aspect of algorithms, an understanding of filtered back projection and appropriate choice of filters and an awareness of the most commonly generated artifacts and how to recognize them. With respect to the acquisition and interpretation of organ images, the faculty will present information on the major aspects of hepatic, brain, cardiac, skeletal, and immunologic imaging techniques. Individual papers are processed separately for the data base

  7. A programmable two-qubit quantum processor in silicon.

    Science.gov (United States)

    Watson, T F; Philips, S G J; Kawakami, E; Ward, D R; Scarlino, P; Veldhorst, M; Savage, D E; Lagally, M G; Friesen, Mark; Coppersmith, S N; Eriksson, M A; Vandersypen, L M K

    2018-03-29

    Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch-Jozsa algorithm and the Grover search algorithm, canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85-89 per cent and concurrences of 73-82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
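    As an illustration of what the processor demonstrates, the two-qubit Deutsch-Jozsa algorithm is small enough to simulate exactly with plain linear algebra. The sketch below is an idealized, noise-free rendering of the textbook circuit (our construction, not the device's control sequences): a single oracle query decides whether a one-bit function f is constant or balanced.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I = np.eye(2)

    def oracle(f):
        # Maps |x, y> to |x, y XOR f(x)> as a 4x4 permutation matrix.
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch_jozsa(f):
        state = np.kron([1, 0], [0, 1])       # qubits prepared as |0>|1>
        state = np.kron(H, H) @ state         # Hadamard on both qubits
        state = oracle(f) @ state             # single oracle query
        state = np.kron(H, I) @ state         # Hadamard on the query qubit
        p0 = state[0] ** 2 + state[1] ** 2    # P(first qubit measured |0>)
        return "constant" if p0 > 0.5 else "balanced"

    print(deutsch_jozsa(lambda x: 0))   # constant f -> "constant"
    print(deutsch_jozsa(lambda x: x))   # balanced f -> "balanced"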

  8. A programmable two-qubit quantum processor in silicon

    Science.gov (United States)

    Watson, T. F.; Philips, S. G. J.; Kawakami, E.; Ward, D. R.; Scarlino, P.; Veldhorst, M.; Savage, D. E.; Lagally, M. G.; Friesen, Mark; Coppersmith, S. N.; Eriksson, M. A.; Vandersypen, L. M. K.

    2018-03-01

    Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch-Jozsa algorithm and the Grover search algorithm, canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85-89 per cent and concurrences of 73-82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.

  9. The Central Trigger Processor (CTP)

    CERN Multimedia

    Franchini, Matteo

    2016-01-01

    The Central Trigger Processor (CTP) receives trigger information from the calorimeter and muon trigger processors, as well as from other trigger sources. It makes the Level-1 decision (L1A) based on a trigger menu.

  10. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    ... Pentium Processor have modified the processor architecture to exploit parallelism in a program. ... The type of operation itself is encoded using 14 bits. ... text of designing simple architectures with low power consumption and execute x86 ...

  11. Embedded processor extensions for image processing

    Science.gov (United States)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  12. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  13. The Molen Polymorphic Media Processor

    NARCIS (Netherlands)

    Kuzmanov, G.K.

    2004-01-01

    In this dissertation, we address high-performance media processing based on a tightly coupled co-processor architectural paradigm. More specifically, we introduce a reconfigurable media augmentation of a general-purpose processor and implement it into a fully operational processor prototype. The

  14. Dual-core Itanium Processor

    CERN Multimedia

    2006-01-01

    Intel's first dual-core Itanium processor, code-named "Montecito", is a major release of Intel's Itanium 2 Processor Family, which implements the Intel Itanium architecture on a dual-core processor with two cores per die (integrated circuit). Itanium 2 is much more powerful than its predecessor, with lower power consumption and thermal dissipation.

  15. gFEX, the ATLAS Calorimeter Level-1 Real Time Processor

    CERN Document Server

    AUTHOR|(SzGeCERN)759889; The ATLAS collaboration; Begel, Michael; Chen, Hucheng; Lanni, Francesco; Takai, Helio; Wu, Weihao

    2016-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high-momentum Higgs, W, and Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx Virtex UltraScale FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 276 optical fibers with the data transferred at the 40 MHz Large Hadron Collider (LHC) clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure all the processor field-programmable gate arrays (FPGAs), monitor board health, and interface to external signals. Now, the pre-prototype board which includes one ZYNQ and one Virtex-7 FPGA ...

  16. gFEX, the ATLAS Calorimeter Level 1 Real Time Processor

    CERN Document Server

    Tang, Shaochun; The ATLAS collaboration

    2015-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high-momentum Higgs, W, and Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx UltraScale FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 264 optical fibers with the data transferred at the 40 MHz LHC clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure all the processor FPGAs, monitor board health, and interface to external signals. Now, the pre-prototype board which includes one ZYNQ and one Virtex-7 FPGA has been designed for testing and verification. The performance ...

  17. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, Jacob [ORNL; Kerekes, Ryan A [ORNL; ST Charles, Jesse Lee [ORNL; Buckner, Mark A [ORNL

    2008-01-01

    performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125 MHz. At each clock cycle, 128K multiply-and-add operations are carried out, which yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine. The Cell processor is the product of 5 years of sustained, intensive R&D collaboration (involving over $400M investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed on 128-bit wide single instruction multiple data (SIMD) streams. An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units. The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.

  18. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    International Nuclear Information System (INIS)

    Barhen, Jacob; Kerekes, Ryan A.; St Charles, Jesse Lee; Buckner, Mark A.

    2008-01-01

    performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125 MHz. At each clock cycle, 128K multiply-and-add operations are carried out, which yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine. The Cell processor is the product of 5 years of sustained, intensive R and D collaboration (involving over $400M investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed on 128-bit wide single instruction multiple data (SIMD) streams. An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at the ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units. The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.

  19. Proceedings of clinical SPECT (single photon emission computed tomography) symposium

    Energy Technology Data Exchange (ETDEWEB)

    1986-09-01

    It has been five years since the last in-depth American College of Nuclear Physicians/Society of Nuclear Medicine Symposium on the subject of single photon emission computed tomography (SPECT) was held. Because this subject was nominated as the single most desired topic, we have selected SPECT imaging as the basis for this year's program. The objectives of this symposium are to survey the progress in SPECT clinical applications that has taken place over the last five years and to provide practical and timely guidelines to users of SPECT so that this exciting imaging modality can be fully integrated into the evaluation of pathologic processes. The first half was devoted to a consideration of technical factors important in SPECT acquisition, and the second half to those organ systems about which sufficient clinical SPECT imaging data are available. With respect to the technical aspects of the program, we have selected the key areas which demand awareness and attention in order to make SPECT operational in clinical practice. These include selection of equipment, details of uniformity correction, utilization of phantoms for equipment acceptance and quality assurance, the major aspects of algorithms, an understanding of filtered back projection and the appropriate choice of filters, and an awareness of the most commonly generated artifacts and how to recognize them. With respect to the acquisition and interpretation of organ images, the faculty will present information on the major aspects of hepatic, brain, cardiac, skeletal, and immunologic imaging techniques. Individual papers are processed separately for the data base. (TEM)
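
    For orientation, filtered back projection, one of the technical topics listed above, reduces in its simplest form to filtering each projection with a ramp filter and smearing it back across the image. The following is a minimal textbook sketch (simple ramp filter, nearest-neighbour interpolation), not a clinical implementation:

        import numpy as np

        def fbp(sinogram, angles_deg):
            """Filtered back projection; sinogram rows are parallel projections."""
            n_ang, n_det = sinogram.shape
            ramp = np.abs(np.fft.fftfreq(n_det))          # simple ramp filter
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                           axis=1))
            xs = np.arange(n_det) - n_det / 2
            X, Y = np.meshgrid(xs, xs)
            recon = np.zeros((n_det, n_det))
            for proj, th in zip(filtered, np.deg2rad(angles_deg)):
                # Nearest detector bin hit by each pixel at this view angle.
                t = np.clip((X * np.cos(th) + Y * np.sin(th) + n_det / 2)
                            .astype(int), 0, n_det - 1)
                recon += proj[t]                          # back-project (smear)
            return recon * np.pi / (2 * n_ang)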

  20. Hybrid Single Photon Emission Computed Tomography/Computed Tomography Sulphur Colloid Scintigraphy in Focal Nodular Hyperplasia

    International Nuclear Information System (INIS)

    Bhoil, Amit; Gayana, Shankramurthy; Sood, Ashwani; Bhattacharya, Anish; Mittal, Bhagwant Rai

    2013-01-01

    It is important to differentiate focal nodular hyperplasia (FNH), a benign condition of the liver most commonly affecting women, from other neoplasms such as hepatic adenoma and metastasis. The functional reticuloendothelial features of FNH can be demonstrated by scintigraphy. We present the case of a patient with breast cancer in whom fluorodeoxyglucose positron emission tomography/computed tomography (CT) showed a homogeneous hyperdense lesion in the liver, which on 99mTc sulphur colloid single-photon emission computed tomography/CT was found to have increased focal tracer uptake, suggestive of FNH

  1. High-Performance Linear Algebra Processor using FPGA

    National Research Council Canada - National Science Library

    Johnson, J

    2004-01-01

    With recent advances in FPGA (Field Programmable Gate Array) technology it is now feasible to use these devices to build special purpose processors for floating point intensive applications that arise in scientific computing...

  2. Multimode power processor

    Science.gov (United States)

    O'Sullivan, George A.; O'Sullivan, Joseph A.

    1999-01-01

    In one embodiment, a power processor operates in three modes: an inverter mode, wherein power is delivered from a battery to an AC power grid or load; a battery charger mode, wherein the battery is charged by a generator; and a parallel mode, wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis, wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources.

  3. The breaking point of modern processor and platform technology

    CERN Document Server

    Nowak, A; Lazzaro, A; Leduc, J

    2011-01-01

    This work is an overview of state-of-the-art processors used in High Energy Physics, their architecture, and an extensive outline of the forthcoming technologies. Silicon process science and hardware design are making constant and rapid progress, and a solid grasp of these developments is imperative to the understanding of their possible future applications, which might include software strategy, optimizations, computing center operations, and hardware acquisitions. In particular, the current issue of software and platform scalability is becoming more and more noticeable, and will develop in the near future with the growing core count of single chips and the approach of certain x86 architectural limits. Other topics brought forward include the hard, physical limits of innovation, the applicability of tried-and-tested computing formulas to modern technologies, as well as an analysis of viable alternative choices for continued development.

  4. Single photon emission computed tomography in lumbar degenerative spondylolisthesis

    International Nuclear Information System (INIS)

    Ito, S.; Muro, T.; Eisenstein, S.

    1998-01-01

    Analysis of single photon emission computed tomographic images and plain X-ray films of the lumbar vertebrae was performed in 15 patients with lumbar spondylosis and 15 patients with lumbar degenerative spondylolisthesis. Particular attention was paid to the facet joint and osteophyte images, and the slipping ratio of spondylolisthetic vertebrae was determined. The slipping ratio in degenerative spondylolisthesis ranged from 11.8% to 22.3%. Hot uptake of 99mTc-HMDP by both L4-5 facet joints was significantly greater in the patients with degenerative spondylolisthesis than in those with lumbar spondylosis. The hot uptake by the osteophytes in lumbar spondylosis was nearly uniform among the three inferior segments, L3-4, L4-5 and L5-S, but was localized to the spondylolisthetic vertebrae, L4-5 or L5-S, in the patients with spondylolisthesis. Half of the osteophytes with hot uptake were assigned to the 3rd degree of Nathan's grading. It was suggested that stress was localized to the slipping vertebrae and their facet joints in patients with lumbar degenerative spondylolisthesis. (author)

  5. Brain single photon emission computed tomography in neonates

    International Nuclear Information System (INIS)

    Denays, R.; Van Pachterbeke, T.; Tondeur, M.

    1989-01-01

    This study was designed to rate the clinical value of [123I]iodoamphetamine (IMP) or [99mTc]hexamethylpropylene amine oxime (HM-PAO) brain single photon emission computed tomography (SPECT) in neonates, especially in those likely to develop cerebral palsy. The results showed that SPECT abnormalities were congruent in most cases with structural lesions demonstrated by ultrasonography. However, mild bilateral ventricular dilatation and bilateral subependymal porencephalic cysts diagnosed by ultrasound were not associated with abnormal SPECT findings. In contrast, some cortical periventricular and sylvian lesions and all the parasagittal lesions well visualized in SPECT studies were not diagnosed by ultrasound scans. In neonates with subependymal and/or intraventricular hemorrhage, the existence of a parenchymal abnormality was only diagnosed by SPECT. These results indicate that [123I]IMP or [99mTc]HM-PAO brain SPECT has potential clinical value, as the neurodevelopmental outcome is clearly related to the site, the extent, and the number of cerebral lesions. Long-term clinical follow-up is, however, mandatory in order to define which SPECT abnormalities are associated with neurologic deficit

  6. Single-Photon Emission Computed Tomography (SPECT) in childhood epilepsy

    International Nuclear Information System (INIS)

    Gulati, Sheffali; Kalra, Veena; Bal, C.S.

    2000-01-01

    The success of epilepsy surgery is determined strongly by the precise location of the epileptogenic focus. The information from clinical electrophysiological data needs to be strengthened by functional neuroimaging techniques. Single photon emission computed tomography (SPECT), which is available locally, has proved useful as a localising investigation. It evaluates regional cerebral blood flow, and the comparison between ictal and interictal blood flow on SPECT has proved to be a sensitive nuclear marker for the site of seizure onset. Many studies find that SPECT localises lesions with greater precision than interictal scalp EEG or anatomic neuroimaging. SPECT is of definite value in temporal lobe epilepsy. Its role in extratemporal lobe epilepsy is less clearly defined. It is useful in various other generalized and partial seizure disorders, including epileptic syndromes, and helps in differentiating pseudoseizures from true seizures. Newer radiopharmaceutical agents with specific neurochemical properties and longer shelf lives are under investigation. Subtraction ictal SPECT co-registered to MRI is a promising new modality. (author)

  7. Micro processors for plant protection

    International Nuclear Information System (INIS)

    McAffer, N.T.C.

    1976-01-01

    Microcomputers can be used satisfactorily in general protection duties, with economic advantages over hardwired systems. The reliability of such protection functions can be enhanced by keeping the task performed by each protection microprocessor simple and by avoiding any substantial dependence of such a task on others. This implies that vital work done for any task is kept within it, and that any communications from it to the outside, or to it from the outside, are restricted to those needed for controlling data transfer; the amount of this data should also be the minimum consistent with satisfactory task execution. Technology is changing rapidly, and devices may become obsolete and be supplanted by new ones before their theoretical reliability can be confirmed or otherwise by field service. This emphasises the need for users to pool device performance data so that effective reliability judgements can be made within the lifetime of the devices. (orig.) [de

  8. Treecode with a Special-Purpose Processor

    Science.gov (United States)

    Makino, Junichiro

    1991-08-01

    We describe an implementation of the modified Barnes-Hut tree algorithm for a gravitational N-body calculation on a GRAPE (GRAvity PipE) backend processor. GRAPE is a special-purpose computer for N-body calculations. It receives the positions and masses of particles from a host computer and then calculates the gravitational force at each coordinate specified by the host. To use this GRAPE processor with the hierarchical tree algorithm, the host computer must maintain a list of all nodes that exert force on a particle. If we create this list for each particle of the system at each timestep, the number of floating-point operations on the host and that on GRAPE would become comparable, and the increased speed obtained by using GRAPE would be small. In our modified algorithm, we create a list of nodes for many particles. Thus, the amount of the work required of the host is significantly reduced. This algorithm was originally developed by Barnes in order to vectorize the force calculation on a Cyber 205. With this algorithm, the computing time of the force calculation becomes comparable to that of the tree construction, if the GRAPE backend processor is sufficiently fast. The obtained speed-up factor is 30 to 50 for a RISC-based host computer and GRAPE-1A with a peak speed of 240 Mflops.
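
    The essential idea, building one interaction list for a whole group of particles and reusing it for every particle in the group (the part that keeps the host's work small), can be sketched compactly. This toy version, with a median-split binary tree, monopole approximation, softened gravity with G = 1, and illustrative parameters, is our own sketch and ignores the self-interaction subtleties of a real treecode; it is not the authors' code:

        import numpy as np

        class Node:
            """Median-split binary tree node storing a monopole (mass, c.o.m.)."""
            def __init__(self, pts, ms):
                self.com = np.average(pts, axis=0, weights=ms)
                self.mass = ms.sum()
                self.size = float(np.max(np.ptp(pts, axis=0)))
                self.children = []
                if len(pts) > 1:
                    ax = int(np.argmax(np.ptp(pts, axis=0)))
                    o = np.argsort(pts[:, ax])
                    h = len(pts) // 2
                    self.children = [Node(pts[o[:h]], ms[o[:h]]),
                                     Node(pts[o[h:]], ms[o[h:]])]

        def interaction_list(node, center, radius, theta, out):
            # Opening criterion applied once per *group*, not per particle.
            d = np.linalg.norm(node.com - center) - radius
            if not node.children or (d > 0 and node.size / d < theta):
                out.append(node)          # far enough: use the monopole
            else:
                for c in node.children:
                    interaction_list(c, center, radius, theta, out)
            return out

        def group_accel(points, masses, group_idx, root, theta=0.5, eps=1e-2):
            gpts = points[group_idx]
            center = gpts.mean(axis=0)
            radius = np.max(np.linalg.norm(gpts - center, axis=1))
            nodes = interaction_list(root, center, radius, theta, [])
            acc = np.zeros_like(gpts)     # accelerations, G = 1
            for n in nodes:               # this inner loop is the GRAPE's job
                r = n.com - gpts
                r2 = (r * r).sum(axis=1) + eps * eps
                acc += n.mass * r / r2[:, None] ** 1.5
            return acc

        # Usage: root = Node(points, masses); a = group_accel(points, masses, idx, root)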

  9. Software verification and validation methodology for advanced digital reactor protection system using diverse dual processors to prevent common mode failure

    International Nuclear Information System (INIS)

    Son, Ki Chang; Shin, Hyun Kook; Lee, Nam Hoon; Baek, Seung Min; Kim, Hang Bae

    2001-01-01

    The Advanced Digital Reactor Protection System (ADRPS), with diverse dual processors, is being developed by the National Research Lab of KOPEC. One of the ADRPS goals is to develop a digital Plant Protection System (PPS) free of Common Mode Failure (CMF). To prevent CMF, the principle of diversity is applied to both the hardware design and the software design. For hardware diversity, two different types of CPUs are used for the Bistable Processor and the Local Coincidence Logic Processor. VME-based Single Board Computers (SBCs) are used as the CPU hardware platforms. The QNX Operating System (OS) and the VxWorks OS are used for software diversity. Rigorous Software Verification and Validation (V and V) is also required to prevent CMF. In this paper, a software V and V methodology for the ADRPS is described to enhance the ADRPS software reliability and to assure high quality of the ADRPS software

  10. Computational and Experimental Insight Into Single-Molecule Piezoelectric Materials

    Science.gov (United States)

    Marvin, Christopher Wayne

    Piezoelectric materials allow for the harvesting of ambient waste energy from the environment. Producing lightweight, highly responsive materials is a challenge for this class of material, requiring polymer, foam, or bio-inspired materials. In this dissertation, I explore the origin of the piezoelectric effect in single molecules through density functional theory (DFT), analyze the piezoresponse of bio-inspired peptidic materials through the use of atomic force and piezoresponse force microscopy (AFM and PFM), and develop a novel class of materials combining flexible polyurethane foams and non-piezoelectric, polar dopants. For the DFT calculations, functional group, regiochemical, and heteroatom derivatives of [6]helicene were examined for their influence on the piezoelectric response. An aza[6]helicene derivative was found to have a piezoelectric response (108 pm/V) comparable to ceramics such as lead zirconium titanate (200+ pm/V). These computed materials have the possibility to compete with current field-leading piezomaterials such as lead zirconium titanate (PZT), zinc oxide (ZnO), and polyvinylidene difluoride (PVDF) and its derivatives. The use of AFM/PFM allows for the demonstration of the piezoelectric effect of the self-assembled monolayer (SAM) peptidic systems. Through PFM, the influence that the helicity and sequence of the peptide have on the overall response of the molecule can be analyzed. Finally, the development of a novel class of piezoelectrics, the foam-based materials, expands the current understanding of the qualities required for a piezoelectric material from ceramic and rigid materials to more flexible, organic materials. Through the exploration of these novel types of piezoelectric materials, new design rules and figures of merit have been developed.

  11. Orthopedic applications of single photon emission computed Tomographic bone scanning

    International Nuclear Information System (INIS)

    Collier, B.D.

    1987-01-01

    When compared with planar bone scanning, single photon emission computed tomography (SPECT) has technical advantages of potential diagnostic significance. Planar imaging often superimposes substantial underlying or overlying activity on the bony structure of medical interest. SPECT, however, can be used to remove such unwanted activity. For example, in the hip the acetabulum extends downwards behind the femoral head. Therefore, when using planar bone scanning techniques, the photon-deficient defect typical of avascular necrosis (AVN) of the femoral head may be obscured by activity originating in the underlying acetabulum. By using SPECT, underlying and overlying distributions of activity can be separated into sequential tomographic planes. For this reason SPECT facilitates the detection of AVN of the femoral head. When referring a patient without a history of malignancy for bone scanning, the orthopaedic surgeon usually has a specific clinical question involving a limited portion of the skeleton. Orthopaedic surgeons at the authors' institution commonly use bone scanning to clarify the cause of back, hip or knee pain; to determine with a physiological test the significance of radiographic findings; and to establish the extent of disease at symptomatic skeletal sites such as the three compartments of the knee. In instances such as these, when clinical concern is limited to a specific anatomical region, a bone scan procedure that includes SPECT imaging of only a portion of the skeleton is appropriate. To date, SPECT of the skeletal system has most frequently been used to evaluate patients with pain in the larger joints and bony structures such as the lumbar spine, hips, knees, or temporomandibular joints (TMJ)

  12. Simulation of a processor switching circuit with APLSV

    International Nuclear Information System (INIS)

    Dilcher, H.

    1979-01-01

    The report describes the simulation of a processor switching circuit with APL. Furthermore, an APL function is presented that simulates a processor in an assembly-like language. Together these serve as a tool for studying processor properties. By means of the programming function it is also possible to program other simulated processors. The processor is to be used for processing data in the real-time analysis that occurs in high energy physics experiments. The data are offered to the computer in already digitized form. A typical data rate is 10 KB/s. The data are structured in blocks. The individual blocks are 1 KB wide and are independent of each other. A processor has to decide whether the block data belong to an event that is part of the background noise and can therefore be discarded, or whether the data should be saved for later evaluation. (orig./WB) [de

  13. Universal quantum gates for Single Cooper Pair Box based quantum computing

    Science.gov (United States)

    Echternach, P.; Williams, C. P.; Dultz, S. C.; Braunstein, S.; Dowling, J. P.

    2000-01-01

    We describe a method for achieving arbitrary 1-qubit gates and controlled-NOT gates within the context of the Single Cooper Pair Box (SCB) approach to quantum computing. Such gates are sufficient to support universal quantum computation.

  14. Trigger and decision processors

    International Nuclear Information System (INIS)

    Franke, G.

    1980-11-01

    In recent years there have been many attempts in high energy physics to make trigger and decision processes faster and more sophisticated. This became necessary due to a steady increase in the number of sensitive detector elements in wire chambers and calorimeters, and it became possible because of the fast developments in integrated circuit technology. In this paper the present situation is reviewed. The discussion is mainly focussed upon event filtering by pure software methods and, on the more hardware-related side, microprogrammable processors as well as random access memory triggers. (orig.)

  15. Optical Finite Element Processor

    Science.gov (United States)

    Casasent, David; Taylor, Bradley K.

    1986-01-01

    A new high-accuracy optical linear algebra processor (OLAP) with many advantageous features is described. It achieves floating-point accuracy, handles bipolar data by sign-magnitude representation, performs LU decomposition using only one channel, partitions easily, and takes data flow into account. A new application (finite element (FE) structural analysis) for OLAPs is introduced and the results of a case study are presented. Error sources in encoded OLAPs are addressed for the first time. Their modeling and simulation are discussed and quantitative data are presented. Dominant error sources and the effects of composite error sources are analyzed.
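
    For reference, the LU decomposition that the OLAP performs optically reduces, in its plain textbook (Doolittle, no pivoting) form, to the following; this is a generic sketch, not the paper's encoded algorithm:

        import numpy as np

        def lu_doolittle(A):
            """Doolittle LU factorization without pivoting: A = L @ U."""
            n = A.shape[0]
            L = np.eye(n)
            U = A.astype(float).copy()
            for k in range(n - 1):
                for i in range(k + 1, n):
                    L[i, k] = U[i, k] / U[k, k]      # elimination multiplier
                    U[i, k:] -= L[i, k] * U[k, k:]   # update trailing row
            return L, U

        A = np.array([[4.0, 3.0], [6.0, 3.0]])
        L, U = lu_doolittle(A)
        assert np.allclose(L @ U, A)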

  16. Bank switched memory interface for an image processor

    International Nuclear Information System (INIS)

    Barron, M.; Downward, J.

    1980-09-01

    A commercially available image processor is interfaced to a PDP-11/45 through an 8K window of memory addresses. When the image processor was not in use, it was desired to be able to use the 8K address space as real memory. The standard method of accomplishing this would have been to use UNIBUS switches to switch in either the physical 8K bank of memory or the image processor memory. This method has the disadvantage of being rather expensive. As a simple alternative, a device was built to selectively enable or disable either an 8K bank of memory or the image processor memory. To enable the image processor under program control, GEN is contracted in size, the memory is disabled, a device partition for the image processor is created above GEN, and the image processor memory is enabled. The process is reversed to restore memory to GEN. The hardware to enable/disable the image and computer memories is controlled using spare bits from a DR-11K output register. The image processor and physical memory can be switched in or out on line with no adverse effects on the system's operation

  17. TC9447F, single-chip DSP (digital signal processor) for audio; 1 chip audio yo DSP LSI TC9447F

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    TC9447F is a single-chip DSP for audio that incorporates a 2-channel AD converter and a 4-channel DA converter. Various application programs, such as sound field control (e.g., hall simulation), digital filtering (e.g., equalization), and dynamic range control, can be stored in its program memory (ROM). It also incorporates a two-channel electronic volume control with ±10 dB trim. A 64-kbit RAM for data delay is built in, so no separately attached RAM is necessary. (translated by NEDO)

  18. SCELib2: the new revision of SCELib, the parallel computational library of molecular properties in the single center approach

    Science.gov (United States)

    Sanna, N.; Morelli, G.

    2004-09-01

    In this paper we present the new version of the SCELib program (CPC catalogue identifier ADMG), a full numerical implementation of the Single Center Expansion (SCE) method. The physics involved is that of producing the SCE description of molecular electronic densities, of molecular electrostatic potentials, and of molecular perturbed potentials due to a negative or positive point charge. This new revision of the program has been optimized to run in serial as well as in parallel execution mode, to support a larger set of molecular symmetries, and to permit the restart of long-lasting calculations. To measure the performance of this new release, a comparative study has been carried out on the most powerful computing architectures in serial and parallel runs. The calculations reported in this paper refer to medium-to-large real-case molecular systems, and they are reported in full detail to best benchmark the parallel architectures the new SCELib2 code will run on. Program summary: Title of program: SCELib2. Catalogue identifier: ADGU. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADGU. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Reference to previous versions: Comput. Phys. Commun. 128 (2) (2000) 139 (CPC catalogue identifier: ADMG). Does the new version supersede the original program?: Yes. Computers for which the program is designed and others on which it has been tested: HP ES45 and rx2600, SUN ES4500, IBM SP, and any single-CPU workstation based on Alpha, SPARC, POWER, Itanium2 and X86 processors. Installations: CASPUR, local. Operating systems under which the program has been tested: HP Tru64 V5.X, SUNOS V5.8, IBM AIX V5.X, Linux RedHat V8.0. Programming language used: C. Memory required to execute with typical data: 10 Mwords, up to 2000 Mwords depending on the molecular system and runtime parameters. No. of bits in a word: 64. No. of processors used: 1 to 32. Has the code been vectorized or parallelized?: Yes

  19. Design of Processors with Reconfigurable Microarchitecture

    Directory of Open Access Journals (Sweden)

    Andrey Mokhov

    2014-01-01

    Energy is becoming a dominating factor for a wide spectrum of computations: from intensive data processing in "big data" companies, resulting in large electricity bills, to infrastructure monitoring with wireless sensors relying on energy harvesting. In this context it is essential for a computation system to be adaptable to the power supply and the service demand, which often vary dramatically during runtime. In this paper we present an approach to building processors with a reconfigurable microarchitecture capable of changing the way they fetch and execute instructions depending on energy availability and application requirements. We show how to use Conditional Partial Order Graphs to formally specify the microarchitecture of such a processor, explore the design possibilities for its instruction set, and synthesise the instruction decoder using correct-by-construction techniques. The paper is focused on the design methodology, which is evaluated by implementing a power-proportional version of the Intel 8051 microprocessor.

  20. XOP: A second generation fast processor for on-line use in high energy physics experiments

    International Nuclear Information System (INIS)

    Lingjaerde, T.

    1981-01-01

    Processors for trigger calculations and data compression in high energy physics are characterized by a high data input capability combined with fast execution of relatively simple routines. In order to achieve the required performance it is advantageous to replace the classical computer instruction set by microcoded instructions, the various fields of which control the internal subunits in parallel. The fast processor called ESOP is based on such a principle: the different operations are handled step by step by dedicated optimized modules under the control of a central instruction unit. Thus the arithmetic operations, address calculations, conditional checking, loop counts and next-instruction evaluation all overlap in time. Based upon the experience from ESOP, the architecture of a new processor 'XOP' is beginning to take shape, which will be faster and easier to use. In this context the most important innovations are: easy handling of operands in the arithmetic unit by means of three data buses and large data files; a powerful data addressing unit for easy handling of vectors as well as single operands; and a very flexible logic for conditional branching. Input/output will be made transparent through the introduction of internal fast processors which will be used in conjunction with powerful firmware as a software debugging aid. (orig.)

  1. Applying graphics processor units to Monte Carlo dose calculation in radiation therapy

    Directory of Open Access Journals (Sweden)

    Bakhtiari M

    2010-01-01

    We investigate the potential of using a graphics processing unit (GPU) for Monte Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massively parallel processing provides a significant acceleration of the MC calculation, and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.
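
    To make the PDD computation concrete, here is a deliberately oversimplified Monte Carlo sketch: 1D geometry, mono-energetic photons, all energy deposited at the first interaction point, no scatter or electron transport (so no build-up region), and an illustrative attenuation coefficient rather than one from the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        mu = 0.05            # total attenuation coefficient, 1/mm (illustrative)
        depth_mm = 300       # phantom depth and number of 1 mm scoring voxels
        n_photons = 200_000

        # Sample each photon's first-interaction depth from the exponential
        # free-path distribution and score its energy in that 1 mm voxel.
        depths = rng.exponential(scale=1.0 / mu, size=n_photons)
        dose, _ = np.histogram(depths, bins=depth_mm, range=(0, depth_mm))

        pdd = 100.0 * dose / dose.max()  # percent depth dose
        print(pdd[:5])                   # monotone fall-off (no build-up here)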

  2. Globe hosts launch of new processor

    CERN Multimedia

    2006-01-01

    Launch of the quadcore processor chip at the Globe. On 14 November, in a series of major media events around the world, the chip-maker Intel launched its new 'quadcore' processor. For the regions of Europe, the Middle East and Africa, the day-long launch event took place in CERN's Globe of Science and Innovation, with over 30 journalists in attendance, coming from as far away as Johannesburg and Dubai. CERN was a significant choice for the event: the first tests of this new generation of processor in Europe had been made at CERN over the preceding months, as part of CERN openlab, a research partnership with leading IT companies such as Intel, HP and Oracle. The event also provided the opportunity for the journalists to visit ATLAS and the CERN Computer Centre. The strategy of putting multiple processor cores on the same chip, which has been pursued by Intel and other chip-makers in the last few years, represents an important departure from the more traditional improvements in the sheer speed of such chips. ...

  3. Lattice gauge theory using parallel processors

    International Nuclear Information System (INIS)

    Lee, T.D.; Chou, K.C.; Zichichi, A.

    1987-01-01

    The book's contents include: Lattice Gauge Theory Lectures: Introduction and Current Fermion Simulations; Monte Carlo Algorithms for Lattice Gauge Theory; Specialized Computers for Lattice Gauge Theory; Lattice Gauge Theory at Finite Temperature: A Monte Carlo Study; Computational Method - An Elementary Introduction to the Langevin Equation, Present Status of Numerical Quantum Chromodynamics; Random Lattice Field Theory; The GF11 Processor and Compiler; and The APE Computer and First Physics Results; Columbia Supercomputer Project: Parallel Supercomputer for Lattice QCD; Statistical and Systematic Errors in Numerical Simulations; Monte Carlo Simulation for LGT and Programming Techniques on the Columbia Supercomputer; Food for Thought: Five Lectures on Lattice Gauge Theory

  4. Quantum chemistry on a superconducting quantum processor

    Energy Technology Data Exchange (ETDEWEB)

    Kaicher, Michael P.; Wilhelm, Frank K. [Theoretical Physics, Saarland University, 66123 Saarbruecken (Germany); Love, Peter J. [Department of Physics and Astronomy, Tufts University, Medford, MA 02155 (United States)

    2016-07-01

    Quantum chemistry is the most promising civilian application for quantum processors to date. We study its adaptation to superconducting (sc) quantum systems, computing the ground-state energy of LiH through a variational hybrid quantum-classical algorithm. We demonstrate how interactions native to sc qubits further reduce the amount of quantum resources needed, pushing sc architectures forward as a near-term candidate for simulations of more complex atoms and molecules.
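
    The shape of such a variational hybrid quantum-classical loop, a classical optimizer steering a parametrized state whose energy the quantum device measures, can be mimicked classically on a toy problem; the 2x2 Hamiltonian below is an arbitrary stand-in, not the LiH Hamiltonian:

        import numpy as np

        # Toy 2x2 Hamiltonian standing in for the measured energy operator.
        H = np.array([[1.0, 0.5],
                      [0.5, -1.0]])

        def energy(theta):
            # "Quantum" step: prepare |psi(theta)> and measure <psi|H|psi>.
            psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
            return psi @ H @ psi

        # "Classical" step: a crude optimizer sweeping the single parameter.
        thetas = np.linspace(0.0, 2.0 * np.pi, 2001)
        best = min(thetas, key=energy)
        print(f"variational minimum: {energy(best):+.4f}")
        print(f"exact ground state:  {np.linalg.eigvalsh(H)[0]:+.4f}")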

  5. Debugging in a multi-processor environment

    International Nuclear Information System (INIS)

    Spann, J.M.

    1981-01-01

    The Supervisory Control and Diagnostic System (SCDS) for the Mirror Fusion Test Facility (MFTF) consists of nine 32-bit minicomputers arranged in a tightly coupled distributed computer system utilizing a share memory as the data exchange medium. Debugging of more than one program in the multi-processor environment is a difficult process. This paper describes what new tools were developed and how the testing of software is performed in the SCDS for the MFTF project

  6. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    Science.gov (United States)

    Hristov, Ivan; Goranov, Goran; Hristova, Radoslava

    2018-02-01

    We consider a standing wave simulation, interesting from a computational point of view, performed by solving coupled 2D perturbed Sine-Gordon equations. We present an OpenMP realization which exploits both the thread and SIMD levels of parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the Hybrilit cluster, and the Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better performance on the KNL processor.

  7. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    Directory of Open Access Journals (Sweden)

    Hristov Ivan

    2018-01-01

    We consider a standing wave simulation, interesting from a computational point of view, performed by solving coupled 2D perturbed Sine-Gordon equations. We present an OpenMP realization which exploits both the thread and SIMD levels of parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named “Ivy Bridge-EP”) in the Hybrilit cluster, and the Xeon Phi 7250 processor (code-named “Knights Landing”, KNL). The results show 2 times better performance on the KNL processor.

  8. WATERLOOP V2/64: A highly parallel machine for numerical computation

    Science.gov (United States)

    Ostlund, Neil S.

    1985-07-01

    Current technological trends suggest that the high-performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to be anything like the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple-processor architecture which attempts a solution to the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, that is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.
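
    The observation that adding a processor can reduce overall performance is easy to make concrete with a toy timing model in which per-processor overhead grows with the processor count; all constants below are illustrative:

        # Toy timing model: ideal parallel work plus a synchronization /
        # communication term that grows with the processor count n.
        def total_time(n, work=1.0, overhead=0.002):
            return work / n + overhead * n

        for n in (1, 8, 16, 22, 32, 64):
            t = total_time(n)
            print(f"n={n:3d}  time={t:.4f}  speedup={total_time(1) / t:5.1f}x")

    With these particular constants the speedup peaks around n = 22 and declines beyond it, which is exactly the effect the abstract describes.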

  9. Single-Chip Computers With Microelectromechanical Systems-Based Magnetic Memory

    NARCIS (Netherlands)

    Carley, L. Richard; Bain, James A.; Fedder, Gary K.; Greve, David W.; Guillou, David F.; Lu, Michael S.C.; Mukherjee, Tamal; Santhanam, Suresh; Abelmann, Leon; Min, Seungook

    This article describes an approach for implementing a complete computer system (CPU, RAM, I/O, and nonvolatile mass memory) on a single integrated-circuit substrate (a chip)—hence, the name "single-chip computer." The approach presented combines advances in the field of microelectromechanical

  10. Computing magnetic anisotropy constants of single molecule magnets

    Indian Academy of Sciences (India)

    Administrator

    Treating the anisotropy Hamiltonian as a perturbation, we compute ... The dependence of DM on the energy gap between the ground and the excited states in both the systems has also been studied by using different sets of exchange constants.

  11. Discovering Motifs in Biological Sequences Using the Micron Automata Processor.

    Science.gov (United States)

    Roy, Indranil; Aluru, Srinivas

    2016-01-01

    Finding approximately conserved sequences, called motifs, across multiple DNA or protein sequences is an important problem in computational biology. In this paper, we consider the (l, d) motif search problem of identifying one or more motifs of length l present in at least q of the n given sequences, with each occurrence differing from the motif in at most d substitutions. The problem is known to be NP-complete, and the largest solved instance reported to date is (26, 11). We propose a novel algorithm for the (l, d) motif search problem using streaming execution over a large set of non-deterministic finite automata (NFA). This solution is designed to take advantage of the Micron Automata Processor, a new technology close to deployment that can simultaneously execute multiple NFA in parallel. We demonstrate the capability for solving much larger instances of the (l, d) motif search problem using the resources available within a single Automata Processor board, by estimating run-times for the problem instances (39, 18) and (40, 17). The paper serves as a useful guide to solving problems using this new accelerator technology.
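
    As a point of reference, a brute-force software check of the (l, d) criterion looks like the following; to stay short it only tests candidate l-mers that occur verbatim in the first sequence, so unlike the authors' NFA-based approach it can miss planted motifs whose exact form appears in no sequence:

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def occurs_within(seq, motif, d):
            # True if some l-mer of seq is within d substitutions of motif.
            l = len(motif)
            return any(hamming(seq[i:i + l], motif) <= d
                       for i in range(len(seq) - l + 1))

        def motif_candidates(seqs, l, d, q):
            """l-mers of seqs[0] occurring (within d mismatches) in >= q sequences."""
            return {seqs[0][i:i + l]
                    for i in range(len(seqs[0]) - l + 1)
                    if sum(occurs_within(s, seqs[0][i:i + l], d)
                           for s in seqs) >= q}

        seqs = ["ACGTACGTGG", "CCACGTACAA", "TTACGAACGT"]
        print(motif_candidates(seqs, l=6, d=1, q=3))  # {'ACGTAC', 'CGTACG'}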

  12. Programming the Linpack Benchmark for the IBM PowerXCell 8i Processor

    Directory of Open Access Journals (Sweden)

    Michael Kistler

    2009-01-01

    In this paper we present the design and implementation of the Linpack benchmark for the IBM BladeCenter QS22, which incorporates two IBM PowerXCell 8i processors. The PowerXCell 8i is a new implementation of the Cell Broadband Engine architecture and contains a set of special-purpose processing cores known as Synergistic Processing Elements (SPEs). The SPEs can be used as computational accelerators to augment the main PowerPC processor. The added computational capability of the SPEs results in a peak double-precision floating-point capability of 108.8 GFLOPS. We explain how we modified the standard open-source implementation of Linpack to accelerate key computational kernels using the SPEs of the PowerXCell 8i processors. We describe in detail the implementation and performance of the computational kernels and also explain how we employed the SPEs for high-speed data movement and reformatting. The result of these modifications is a Linpack benchmark optimized for the IBM PowerXCell 8i processor that achieves 170.7 GFLOPS on a BladeCenter QS22 with 32 GB of DDR2 SDRAM memory. Our implementation of Linpack also supports clusters of QS22s, and was used to achieve a result of 11.1 TFLOPS on a cluster of 84 QS22 blades. We compare our results on a single BladeCenter QS22 with the base Linpack implementation without SPE acceleration to illustrate the benefits of our optimizations.
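
    Relating the reported throughput to the hardware peak is simple arithmetic; the sketch below assumes the quoted 108.8 GFLOPS figure is per PowerXCell 8i processor (two per QS22 blade), which is our reading of the abstract:

        # Figures from the abstract; 108.8 GFLOPS read as per-processor peak.
        peak_per_chip = 108.8                  # GFLOPS, double precision
        blade_peak = 2 * peak_per_chip         # 217.6 GFLOPS per QS22 blade
        blade_achieved = 170.7                 # GFLOPS reported for one blade

        print(f"single blade: {100 * blade_achieved / blade_peak:.1f}% of peak")

        cluster_achieved = 11.1e3              # 11.1 TFLOPS on 84 blades
        cluster_peak = 84 * blade_peak
        print(f"84-blade cluster: {100 * cluster_achieved / cluster_peak:.1f}% of peak")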

  13. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing, together with the increasing trend from single-processor to parallel computer architectures, has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  14. Accelerating molecular dynamic simulation on the cell processor and Playstation 3.

    Science.gov (United States)

    Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S

    2009-01-30

    Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase its power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.

  15. Quantified measurement of brain blood volume: comparative evaluations between the single photon emission computer tomography and the positron computer tomography

    International Nuclear Information System (INIS)

    Bouvard, G.; Fernandez, Y.; Petit-Taboue, M.C.; Derlon, J.M.; Travere, J.M.; Le Poec, C.

    1991-01-01

    The quantified measurement of cerebral blood volume is of interest for studies of the cerebral blood circulation. This measurement is often performed with positron computed tomography. It is more difficult in single photon emission computed tomography, where there are physical problems arising from the limited resolution of the detector, the Compton effect, and photon attenuation. The objective of this study is to compare the results of these two techniques. The quantified measurement of brain blood volume is possible with single photon emission computed tomography; however, there is a loss of contrast [fr

  16. Computer architecture fundamentals and principles of computer design

    CERN Document Server

    Dumas II, Joseph D

    2005-01-01

    Introduction to Computer Architecture; What is Computer Architecture?; Architecture vs. Implementation; Brief History of Computer Systems; The First Generation; The Second Generation; The Third Generation; The Fourth Generation; Modern Computers - The Fifth Generation; Types of Computer Systems; Single Processor Systems; Parallel Processing Systems; Special Architectures; Quality of Computer Systems; Generality and Applicability; Ease of Use; Expandability; Compatibility; Reliability; Success and Failure of Computer Architectures and Implementations; Quality and the Perception of Quality; Cost Issues; Architectural Openness, Market Timi

  17. The Interface Between Redundant Processor Modules Of Safety Grade PLC Using Mass Storage DPRAM

    International Nuclear Information System (INIS)

    Hwang, Sung Jae; Song, Seong Hwan; No, Young Hun; Yun, Dong Hwa; Park, Gang Min; Kim, Min Gyu; Choi, Kyung Chul; Lee, Ui Taek

    2010-01-01

    The processor module of the safety-grade PLC (hereinafter called POSAFE-Q) developed by POSCO ICT provides high reliability and safety. However, POSAFE-Q could suffer a malfunction under abnormal operation brought about by exceptional environmental conditions, in which case it would not be able to carry out its function normally. To prevent such situations, the need for a redundant processor module was raised. Therefore, a redundant processor module, NCPU-2Q, has been developed, which provides not only the functions of the single processor module, with its high reliability and safety, but also the functions of a redundant processor

  18. High-Speed General Purpose Genetic Algorithm Processor.

    Science.gov (United States)

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcoming this problem by speeding up the GA procedure. Hence, we designed a digital CMOS implementation of a GA in a [Formula: see text] process. The proposed processor is not bound to a specific application. Indeed, it is a general-purpose processor, which is capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as a pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, a dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator, the proposed hardware may be used in constrained problems, which are very common in control applications. In the proposed design, a large search space is achievable through bit-string length extension of the individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GA procedure can be run on several connected processors simultaneously.
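
    For readers unfamiliar with the steady-state scheme the processor implements, a minimal software rendition is sketched below (toy one-max fitness; all parameters illustrative). The hardware gains come from doing the selection, crossover, mutation, and fitness steps in parallel pipelines rather than serially as here:

        import random

        def steady_state_ga(fitness, n_bits=16, pop_size=32, steps=2000):
            """Minimal steady-state GA: each step replaces the worst individual."""
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            for _ in range(steps):
                p1 = max(random.sample(pop, 2), key=fitness)  # tournament
                p2 = max(random.sample(pop, 2), key=fitness)  # selection
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]      # one-point crossover
                i = random.randrange(n_bits)
                child[i] ^= 1                    # single-bit mutation
                worst = min(range(pop_size), key=lambda j: fitness(pop[j]))
                pop[worst] = child               # steady-state replacement
            return max(pop, key=fitness)

        # Toy objective: maximize the number of ones ("one-max").
        best = steady_state_ga(fitness=sum)
        print(best, sum(best))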

  19. Composable processor virtualization for embedded systems

    NARCIS (Netherlands)

    Molnos, A.M.; Milutinovic, A.; She, D.; Goossens, K.G.W.

    2010-01-01

    Processor virtualization divides a physical processor's time among a set of virtual machines, enabling efficient hardware utilization, application security, and the co-existence of different operating systems on the same processor. Though initially intended for the server domain, virtualization

  20. Measuring Emotion Regulation with Single Dry Electrode Brain Computer Interface

    NARCIS (Netherlands)

    van der Wal, C.N.; Irrmischer, M.; Guo, Y.; Friston, K.; Faisal, A.; Hill, S.; Peng, H.

    2015-01-01

    Wireless brain-computer interfaces (BCIs) are promising for new intelligent applications in which emotions are detected by measuring brain activity. Applications, such as serious games and video game therapy, are measuring and using the user's emotional state in order to determine the intensity

  1. Dual-scale topology optoelectronic processor.

    Science.gov (United States)

    Marsden, G C; Krishnamoorthy, A V; Esener, S C; Lee, S H

    1991-12-15

    The dual-scale topology optoelectronic processor (D-STOP) is a parallel optoelectronic architecture for matrix algebraic processing. The architecture can be used for matrix-vector multiplication and two types of vector outer product. The computations are performed electronically, which allows multiplication and summation concepts in linear algebra to be generalized to various nonlinear or symbolic operations. This generalization permits the application of D-STOP to many computational problems. The architecture uses a minimum number of optical transmitters, which thereby reduces fabrication requirements while maintaining area-efficient electronics. The necessary optical interconnections are space invariant, minimizing space-bandwidth requirements.
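
    The generalization mentioned above, swapping multiplication and summation for other operations, amounts to a matrix-vector product over a pluggable pair of operators; a small sketch (the min-plus example is our own illustration, not from the paper):

        from functools import reduce

        def general_matvec(M, v, mul, add):
            """Matrix-vector product with user-supplied 'multiply' and 'add'."""
            return [reduce(add, (mul(M[i][j], v[j]) for j in range(len(v))))
                    for i in range(len(M))]

        M = [[0, 3], [1, 0]]
        v = [2, 5]

        # Ordinary linear algebra ...
        print(general_matvec(M, v, mul=lambda a, b: a * b, add=lambda a, b: a + b))
        # ... versus the (min, +) semiring used in shortest-path relaxation.
        print(general_matvec(M, v, mul=lambda a, b: a + b, add=min))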

  2. Nuclear interactive evaluations on distributed processors

    International Nuclear Information System (INIS)

    Dix, G.E.; Congdon, S.P.

    1988-01-01

    BWR [boiling water reactor] nuclear design is a complicated process, involving trade-offs among a variety of conflicting objectives. Complex computer calculations are usually required for each design iteration. GE Nuclear Energy has implemented a system where the evaluations are performed interactively on a large number of small microcomputers. This approach minimizes the time it takes to carry out design iterations even though the processor speeds are low compared with modern supercomputers. All of the desktop microcomputers are linked to a common data base via an ethernet communications system so that design data can be shared and data quality can be maintained

  3. Bulk-memory processor for data acquisition

    International Nuclear Information System (INIS)

    Nelson, R.O.; McMillan, D.E.; Sunier, J.W.; Meier, M.; Poore, R.V.

    1981-01-01

    To meet the diverse needs and data rate requirements at the Van de Graaff and Weapons Neutron Research (WNR) facilities, a bulk memory system has been implemented which includes a fast and flexible processor. This bulk memory processor (BMP) utilizes bit-slice and microcode techniques and features a 24-bit-wide internal architecture, allowing direct addressing of up to 16 megawords of memory and histogramming of up to 16 million counts per channel without overflow. The BMP is interfaced to the MOSTEK MK 8000 bulk memory system and to the standard MODCOMP computer I/O bus. Coding for the BMP, both at the microcode level and with macro instructions, is supported. The generalized data acquisition system has been extended to support the BMP in a manner transparent to the user

  4. A dedicated line-processor as used at the SHF

    International Nuclear Information System (INIS)

    Bevan, A.V.; Hatley, R.W.; Price, D.R.; Rankin, P.

    1985-01-01

    A hardwired trigger processor was used at the SLAC Hybrid Facility to find evidence for charged tracks originating from the fiducial volume of a 40'' rapid-cycling bubble chamber. Straight-line projections of these tracks in the plane perpendicular to the applied magnetic field were searched for using data from three sets of proportional wire chambers (PWCs). This information was made directly available to the processor by means of a special digitizing card. The results memory of the processor simulated read-only memory in a 168/E processor and was accessible by it. The 168/E controlled the issuing of a trigger command to the bubble chamber flash tubes. The same design of digitizer card used by the line processor was incorporated into the 168/E, again as read-only memory, which allowed it access to the raw data for continual monitoring of trigger integrity. The design logic of the trigger processor was verified by running real PWC data through a FORTRAN simulation of the hardware. This enabled the debugging to become highly automated, since a step-by-step, computer-controlled comparison of processor registers to simulation predictions could be made

  5. Single-photon emission computed tomography/computed tomography in lung cancer and malignant lymphoma.

    Science.gov (United States)

    Schillaci, Orazio

    2006-10-01

    In nuclear oncology, despite the fast-growing diffusion of (18)F-fluorodeoxyglucose positron emission tomography (FDG-PET), single-photon emission computed tomography (SPECT) studies can still play a useful clinical role in several applications. The main limitation of SPECT imaging with tumor-seeking agents is the lack of structural delineation of the pathologic processes they detect; this drawback sometimes renders SPECT interpretation difficult and can diminish its diagnostic accuracy. Fusion with morphological studies can overcome this limitation by giving an anatomical map to the scintigraphic data. In the past, software-based fusion of independently performed SPECT and CT images proved to be time-consuming and impractical for routine use. The recent development of dual-modality integrated imaging systems that provide functional (SPECT) and anatomical (CT) images in the same scanning session, with the acquired images coregistered by means of the hardware, has opened a new era in this field. The first reports indicate that SPECT/CT is very useful in cancer imaging because it is able to provide further information of clinical value in several cases. In SPECT studies of lung cancer and malignant lymphomas using different radiopharmaceuticals, hybrid images are of value in providing the correct localization of tumor sites, with precise detection of the involved organs and definition of their functional status, and in allowing the exclusion of disease in sites of physiologic tracer uptake. Therefore, in lung cancer and lymphomas, hybrid SPECT/CT can play a role in the diagnosis of the primary tumor, in the staging of the disease, in the follow-up, in the monitoring of therapy, in the detection of recurrence, and in dosimetric estimations for targeted radionuclide therapy.

  6. Performance of Artificial Intelligence Workloads on the Intel Core 2 Duo Series Desktop Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    As processor architecture has become more advanced, Intel introduced its Intel Core 2 Duo series of processors. The performance impact on Intel Core 2 Duo processors is analyzed using SPEC CPU INT 2006 performance numbers. This paper studies the behavior of Artificial Intelligence (AI) benchmarks on Intel Core 2 Duo series processors. Moreover, we estimate the task completion time (TCT) at 1 GHz, 2 GHz, and 3 GHz Intel Core 2 Duo processor frequencies. Our results show the performance scalability of the Intel Core 2 Duo series processors. Even though the AI benchmarks have similar execution times, they have dissimilar characteristics, which are identified using principal component analysis and a dendrogram. As the processor frequency increased from 1.8 GHz to 3.167 GHz, the execution time decreased by ~370 s for AI workloads. In the case of Physics/Quantum Computing programs it was ~940 s.
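
    TCT estimates of this kind typically rest on the first-order assumption that a CPU-bound task's completion time scales inversely with clock frequency; the sketch below shows the mechanics of such an extrapolation with a hypothetical measured point, not data from the paper:

        def tct_at(f_target_ghz, t_measured_s, f_measured_ghz):
            # First-order model: CPU-bound completion time scales as 1/frequency.
            return t_measured_s * f_measured_ghz / f_target_ghz

        t_meas, f_meas = 900.0, 3.167   # hypothetical measured point
        for f in (1.0, 2.0, 3.0):
            print(f"TCT @ {f:.0f} GHz ~ {tct_at(f, t_meas, f_meas):7.1f} s")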

  7. Promising role of single photon emission computed tomography/computed tomography in Meckel's scan

    International Nuclear Information System (INIS)

    Jain, Anurag; Chauhan, MS; Pandit, AG; Kumar, Rajeev; Sharma, Amit

    2012-01-01

    Meckel's scan is a common procedure performed in nuclear medicine. Single-photon emission computed tomography/computed tomography (SPECT/CT) in a suspected case of heterotopic gastric mucosa can increase the accuracy of its anatomic localization. We present two suspected cases of Meckel's diverticulum in which SPECT/CT co-registration helped in better localization of the pathology

  8. Code compression for VLIW embedded processors

    Science.gov (United States)

    Piccinelli, Emiliano; Sannino, Roberto

    2004-04-01

    The implementation of processors for embedded systems involves various issues: the main constraints are cost, power dissipation, and die area. On the other side, new terminals perform functions that require more computational flexibility and effort. Long code streams must be loaded into memories, which are expensive and power-consuming, to run on DSPs or CPUs. To overcome this issue, the proprietary "SlimCode" algorithm presented in this paper (patent-pending technology) can reduce the size of the program memory. It can run offline and work directly on the binary code the compiler generates, compressing it and creating a new binary file, about 40% smaller than the original one, to be loaded into the program memory of the processor. The decompression unit will be a small ASIC, placed between the memory controller and the system bus of the processor, leaving the internal CPU architecture unchanged: this implies that the methodology is completely transparent to the core. We present comparisons versus the state-of-the-art IBM CodePack algorithm, along with its architectural implementation into the ST200 VLIW family core.

  9. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short-period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above m_b 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)

  10. Single-Cell Transcriptomics Bioinformatics and Computational Challenges

    Directory of Open Access Journals (Sweden)

    Lana Garmire

    2016-09-01

    The emerging single-cell RNA-Seq (scRNA-Seq) technology holds the promise to revolutionize our understanding of diseases and the associated biological processes at an unprecedented resolution. It opens the door to revealing intercellular heterogeneity and has been employed in a variety of applications, ranging from characterizing cancer cell subpopulations to elucidating tumor resistance mechanisms. Parallel to improving experimental protocols to deal with technological issues, deriving new analytical methods to reveal the complexity in scRNA-Seq data is just as challenging. Here we review the current state-of-the-art bioinformatics tools and methods for scRNA-Seq analysis, as well as addressing some critical analytical challenges that the field faces.

  11. Efficient quantum walk on a quantum processor

    Science.gov (United States)

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-01-01

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonic quantum processor. PMID:27146471
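
    Because circulant matrices are diagonalized by the discrete Fourier transform, the walk's output distribution on a small instance can be cross-checked classically with FFTs; a minimal numpy sketch (not the paper's photonic circuit, and tractable only at small sizes, which is exactly the point of the sampling-hardness argument):

```python
# Classical cross-check of a continuous-time quantum walk (CTQW) on a
# small circulant graph: the adjacency matrix is circulant, hence
# diagonalised by the DFT, so exp(-iAt) can be applied with FFTs.
import numpy as np

def ctqw_distribution(first_col, t, start=0):
    """Output distribution of a CTQW on the circulant graph whose
    adjacency matrix has first column `first_col`, after time t."""
    n = len(first_col)
    lam = np.fft.fft(first_col)              # eigenvalues of the circulant
    psi0 = np.zeros(n, dtype=complex)
    psi0[start] = 1.0
    psi_t = np.fft.ifft(np.exp(-1j * lam * t) * np.fft.fft(psi0))
    return np.abs(psi_t) ** 2

# Example: the cycle C8 (a circulant graph with neighbours +/-1), t = 1
col = [0, 1, 0, 0, 0, 0, 0, 1]
print(ctqw_distribution(col, 1.0))           # probabilities sum to 1
```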

  12. Single photon emission computed tomography by using fan beam collimator

    International Nuclear Information System (INIS)

    Akiyama, Yoshihisa

    1992-01-01

    A multislice fan beam collimator, which has parallel collimation along the cephalocaudal axis of the patient and converging collimation within planes perpendicular to that axis, was designed for a SPECT system with a rotating scintillation camera, and it was constructed by the lead casting method developed in recent years. A reconstruction algorithm for fan beam SPECT was derived by combining the reconstruction algorithm of parallel beam SPECT with that of fan beam X-ray CT. The algorithm for fan beam SPECT was verified by means of computer simulation and a head phantom filled with diluted radionuclide. Not only 99mTc but also 123I was used as a radionuclide. A SPECT image with the fan beam collimator was compared with that of a parallel hole, low energy, high resolution collimator which was routinely used for clinical and research SPECT studies. Both system resolution and sensitivity of the fan beam collimator were ∼20% better than those of the parallel hole collimator. Comparing SPECT images obtained with the fan beam collimator against those of the parallel hole collimator, the fan beam images had far better resolution. A fan beam collimator is a useful tool for SPECT studies. (author)

  13. Global synchronization of parallel processors using clock pulse width modulation

    Science.gov (United States)

    Chen, Dong; Ellavsky, Matthew R.; Franke, Ross L.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Jeanson, Mark J.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Littrell, Daniel; Ohmacht, Martin; Reed, Don D.; Schenck, Brandon E.; Swetz, Richard A.

    2013-04-02

    A circuit generates a global clock signal with a pulse width modification to synchronize processors in a parallel computing system. The circuit may include a hardware module and a clock splitter. The hardware module may generate a clock signal and perform a pulse width modification on it. The pulse width modification changes a pulse width within a clock period of the clock signal. The clock splitter may distribute the pulse-width-modified clock signal to a plurality of processors in the parallel computing system.

  14. A fast processor for di-lepton triggers

    CERN Document Server

    Kostarakis, P; Barsotti, E; Conetti, S; Cox, B; Enagonio, J; Haldeman, M; Haynes, W; Katsanevas, S; Kerns, C; Lebrun, P; Smith, H; Soszyniski, T; Stoffel, J; Treptow, K; Turkot, F; Wagner, R

    1981-01-01

    As a new application of the Fermilab ECL-CAMAC logic modules, a fast trigger processor was developed for Fermilab experiment E-537, which aims to measure high-mass di-muon production by antiprotons. The processor matches the hit information received from drift chambers and scintillation counters to find candidate muon tracks and determine their directions and momenta. The tracks are then paired to compute an invariant mass: when the computed mass falls within the desired range, the event is accepted. The process is accomplished in 5 to 10 microseconds, while achieving a trigger rate reduction of up to a factor of ten. (5 refs).
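
    The trigger arithmetic described above is the standard two-track invariant mass test; a reference sketch of that computation (track momenta and the mass threshold here are illustrative, not the experiment's values):

```python
# Offline reference for the trigger arithmetic: pair every two candidate
# muon tracks, form the invariant mass from their momenta, and accept
# the event if any pair exceeds a preset mass threshold. The hardware
# did this with ECL-CAMAC logic in microseconds.
import itertools
import math

MUON_MASS = 0.1057  # GeV

def invariant_mass(p1, p2):
    """Invariant mass of two muon tracks given (px, py, pz) in GeV."""
    e1 = math.sqrt(MUON_MASS**2 + sum(c * c for c in p1))
    e2 = math.sqrt(MUON_MASS**2 + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    m2 = (e1 + e2) ** 2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))

def accept(tracks, min_mass):
    """True if any track pair has invariant mass above the threshold."""
    return any(invariant_mass(a, b) >= min_mass
               for a, b in itertools.combinations(tracks, 2))

# Two nearly back-to-back 10 GeV muons give m ~ 20 GeV -> accepted
print(accept([(0, 0, 10.0), (0, 0.5, -10.0)], min_mass=4.0))
```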

  15. OLYMPUS system and development of its pre-processor

    International Nuclear Information System (INIS)

    Okamoto, Masao; Takeda, Tatsuoki; Tanaka, Masatoshi; Asai, Kiyoshi; Nakano, Koh.

    1977-08-01

    The OLYMPUS SYSTEM developed by K. V. Roberts et al. was converted and introduced on the FACOM 230/75 computer system of the JAERI Computing Center. A pre-processor was also developed for the OLYMPUS SYSTEM. The OLYMPUS SYSTEM is very useful for the development, standardization and exchange of programs in thermonuclear fusion research and plasma physics. The pre-processor developed by the present authors is not only essential for the JAERI OLYMPUS SYSTEM, but also useful for the manipulation, creation and correction of program files. (auth.)

  16. Real-time wavefront processors for the next generation of adaptive optics systems: a design and analysis

    Science.gov (United States)

    Truong, Tuan; Brack, Gary L.; Troy, Mitchell; Trinh, Thang; Shi, Fang; Dekany, Richard G.

    2003-02-01

    Adaptive optics (AO) systems currently under investigation will require at least a two-orders-of-magnitude increase in the number of actuators, which in turn translates to effectively a 10^4 increase in compute latency. Since the performance of an AO system invariably improves as the compute latency decreases, it is important to study how today's computer systems will scale to address this expected increase in actuator utilization. This paper answers this question by characterizing the performance of a single deformable mirror (DM) Shack-Hartmann natural guide star AO system implemented on the present-generation digital signal processor (DSP) TMS320C6701 from Texas Instruments. We derive the compute latency of such a system in terms of a few basic parameters, such as the number of DM actuators, the number of data channels used to read out the camera pixels, the number of DSPs, the available memory bandwidth, as well as the inter-processor communication (IPC) bandwidth and the pixel transfer rate. We show how the results would scale for future systems that utilize multiple DMs and guide stars. We demonstrate that the principal performance bottleneck of such a system is the available memory bandwidth of the processors and, to a lesser extent, the IPC bandwidth. This paper concludes with suggestions for mitigating this bottleneck.
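
    The abstract names the parameters of its latency model without reproducing the formula, so the following is only a back-of-the-envelope stand-in: it assumes reconstruction is a matrix-vector multiply whose time is dominated by streaming the reconstructor matrix from memory across the DSPs, the bottleneck the paper identifies.

```python
# Hedged toy latency model (not the paper's derivation): time to apply
# an (n_act x 2*n_subaps) wavefront reconstructor matrix when the work
# is memory-bandwidth-bound and split evenly across n_dsp processors.
def reconstruct_latency(n_act, n_subaps, n_dsp, mem_bw_gbs, word_bytes=4):
    """Seconds to stream the reconstructor matrix once per frame."""
    matrix_bytes = n_act * 2 * n_subaps * word_bytes   # x and y slopes
    return matrix_bytes / (n_dsp * mem_bw_gbs * 1e9)

# Illustrative numbers: 3000 actuators, 3000 subapertures, 16 DSPs,
# 1 GB/s of memory bandwidth per DSP
print(f"{reconstruct_latency(3000, 3000, 16, 1.0) * 1e3:.2f} ms")
```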

  17. Array processors: an introduction to their architecture, software, and applications in nuclear medicine

    International Nuclear Information System (INIS)

    King, M.A.; Doherty, P.W.; Rosenberg, R.J.; Cool, S.L.

    1983-01-01

    Array processors are ''number crunchers'' that dramatically enhance the processing power of nuclear medicine computer systems for applications dealing with the repetitive operations involved in digital image processing of large segments of data. The general architecture and the programming of array processors are introduced, along with some applications of array processors to the reconstruction of emission tomographic images, digital image enhancement, and functional image formation

  18. Asymmetrical floating point array processors, their application to exploration and exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Geriepy, B L

    1983-01-01

    An asymmetrical floating point array processor is a special-purpose scientific computer which operates under the asymmetrical control of a host computer. Although an array processor can receive fixed point input and produce fixed point output, its primary mode of operation is floating point. The first generation of array processors was oriented towards time series information. The next generation of array processors has proved much more versatile, and their applicability ranges from petroleum reservoir simulation to speech synthesis. Array processors are becoming commonplace in mining, the primary usage being the construction of grids, by usual methods or by kriging. The Australian mining community is among the world's leaders in regard to computer-assisted exploration and exploitation systems. Part of this leadership role must be providing guidance to computer vendors in regard to current and future requirements.

  19. Computed tomography angiography and perfusion to assess coronary artery stenosis causing perfusion defects by single photon emission computed tomography

    DEFF Research Database (Denmark)

    Rochitte, Carlos E; George, Richard T; Chen, Marcus Y

    2014-01-01

    AIMS: To evaluate the diagnostic power of integrating the results of computed tomography angiography (CTA) and CT myocardial perfusion (CTP) to identify coronary artery disease (CAD), defined as a flow-limiting coronary artery stenosis causing a perfusion defect by single photon emission computed tomography (SPECT). METHODS AND RESULTS: We conducted a multicentre study to evaluate the accuracy of integrated CTA-CTP for the identification of patients with flow-limiting CAD defined by ≥50% stenosis by invasive coronary angiography (ICA) with a corresponding perfusion deficit on stress single photon emission computed tomography (SPECT/MPI). Sixteen centres enrolled 381 patients who underwent combined CTA-CTP and SPECT/MPI prior to conventional coronary angiography. All four image modalities were analysed in blinded independent core laboratories. The prevalence of obstructive CAD defined by combined ICA...

  20. A microprocessor-based single board computer for high energy physics event pattern recognition

    International Nuclear Information System (INIS)

    Bernstein, H.; Gould, J.J.; Imossi, R.; Kopp, J.K.; Love, W.A.; Ozaki, S.; Platner, E.D.; Kramer, M.A.

    1981-01-01

    A single board MC 68000 based computer has been assembled and benchmarked against the CDC 7600 running portions of the pattern recognition code used at the MPS. This computer has a floating-point coprocessor to achieve throughputs equivalent to several percent that of the 7600. A major part of this work was the construction of a FORTRAN compiler including assembler, linker and library. The intention of this work is to assemble a large number of these single board computers in a parallel FASTBUS environment to act as an on-line and off-line filter for the raw data from MPS II and ISABELLE experiments. (orig.)

  2. The design of a graphics processor

    International Nuclear Information System (INIS)

    Holmes, M.; Thorne, A.R.

    1975-12-01

    The design of a graphics processor is described which takes into account known and anticipated user requirements, the availability of cheap minicomputers, the state of integrated circuit technology, and the overall need to minimise cost for a given performance. The main user needs are the ability to display large high resolution pictures, and to dynamically change the user's view in real time by means of fast coordinate processing hardware. The transformations that can be applied to 2D or 3D coordinates either singly or in combination are: translation, scaling, mirror imaging, rotation, and the ability to map the transformation origin on to any point on the screen. (author)
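
    The transformation set listed above composes naturally as 3x3 homogeneous matrices. As an illustration (not the processor's hardware data path), a minimal sketch including the mapping of the transformation origin onto an arbitrary screen point:

```python
# Sketch of the coordinate pipeline the abstract lists: translation,
# scaling, mirror imaging and rotation composed as 3x3 homogeneous
# matrices, with the transformation origin mapped to a screen point.
import numpy as np

def translate(tx, ty): return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)
def scale(sx, sy):     return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)
def mirror_x():        return scale(1, -1)   # mirror about the x axis

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def about(origin, transform):
    """Apply a transform with its origin mapped to a screen point."""
    ox, oy = origin
    return translate(ox, oy) @ transform @ translate(-ox, -oy)

# Rotate a point 90 degrees about screen point (100, 100)
p = np.array([110.0, 100.0, 1.0])
print(about((100, 100), rotate(np.pi / 2)) @ p)   # -> [100, 110, 1]
```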

  3. Foundations for computer simulation of a low pressure oil flooded single screw air compressor

    Science.gov (United States)

    Bein, T. W.

    1981-12-01

    The necessary logic to construct a computer model to predict the performance of an oil flooded, single screw air compressor is developed. The geometric variables and relationships used to describe the general single screw mechanism are developed. The governing equations to describe the processes are developed from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flowrates through the leakage paths throughout the compression cycle of the single screw compressor. The model uses empirical external values as the basis for the internal predictions. The computer values are compared to the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions that are drawn.

  4. A CAMAC-based laboratory computer system

    International Nuclear Information System (INIS)

    Westphal, G.P.

    1975-01-01

    A CAMAC-based laboratory computer network is described. By sharing a common mass memory, it offers distinct advantages over slow and core-consuming single-processor installations. A fast compiler-BASIC, with extensions for CAMAC and real-time work, provides a convenient means for interactive experiment control

  5. Channel processor in 2D cluster finding algorithm for high energy physics application

    International Nuclear Information System (INIS)

    Paul, Rourab; Chakrabarti, Amlan; Mitra, Jubin; Khan, Shuaib A.; Nayak, Tapan; Mukherjee, Sanjoy

    2016-01-01

    In A Large Ion Collider Experiment (ALICE) at CERN, approximately 1 TB/s of data comes from the front-end electronics. Previously, we had one GBT link, and the cluster algorithm operated at clock frequencies of 133 MHz and 320 MHz in Run 1 and Run 2, respectively. The cluster algorithm proposed in Runs 1 and 2 could not work in Run 3 as the data rate increased almost 20 times. The older version of the cluster algorithm receives data sequentially as a stream. It has two main sub-processes: the channel processor and the merging process. The channel processor first finds a peak Q_max and sums the pad (sensor) data from -2 to +2 time bins in the time direction. The computed value is stored in a register named cluster fragment data (cfd_o). The merging process merges cfd_o values in the pad direction. The data streams in Run 2 arrive sequentially and are processed by the channel processor and merging block in a sequential manner with very little resource overhead. In Run 3, data arrive in parallel: 1600 values from 1600 pads for a single time instant arrive every 200 ns (5 MHz), which is very challenging to process within the resource budget of the Arria 10 FPGA hardware at a 250 to 320 MHz cluster clock
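
    As an illustration of the channel-processor step described above, here is a sequential software reference (the Run 3 FPGA version processes all 1600 pads in parallel; the threshold and waveform are made up):

```python
# Behavioural sketch of the channel processor: find a peak (Q_max) in
# one pad's time samples and sum the samples from -2 to +2 time bins
# around it into a cluster fragment (cfd_o). A later merging step would
# combine fragments across neighbouring pads.
def channel_processor(samples, threshold=5):
    """Return (peak_time_bin, cfd) pairs for local maxima above threshold."""
    fragments = []
    for t in range(2, len(samples) - 2):
        q = samples[t]
        if q > threshold and samples[t - 1] <= q and q > samples[t + 1]:
            cfd = sum(samples[t - 2:t + 3])   # -2 .. +2 time bins
            fragments.append((t, cfd))
    return fragments

# Example: one pulse centred at time bin 5
print(channel_processor([0, 0, 1, 3, 8, 12, 7, 2, 0, 0]))  # [(5, 32)]
```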

  6. An energy-efficient high-performance processor with reconfigurable data-paths using RSFQ circuits

    International Nuclear Information System (INIS)

    Takagi, Naofumi

    2013-01-01

    Highlights: ► An idea of a high-performance computer using RSFQ circuits is shown. ► An outline of a processor with reconfigurable data-paths (RDPs) is shown. ► Architectural details of an SFQ-RDP are described. -- Abstract: We show recent progress in our research on an energy-efficient high-performance processor with reconfigurable data-paths (RDPs) using rapid single-flux-quantum (RSFQ) circuits. We mainly describe the architectural details of an RDP implemented using RSFQ circuits. An RDP consists of many floating-point units (FPUs) and operand routing networks (ORNs) which connect the FPUs. We reconfigure the RDP to fit a computation, i.e., a group of floating-point operations appearing in a ‘for’ loop of programs for numerical computations, by setting the routes in the ORNs before the execution of the loop. In the RDP, many FPUs work in parallel in pipelined fashion, and hence, very high-performance computation is achieved

  7. ALICE chip processor

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    This tiny chip provides data processing for the time projection chamber on ALICE. Known as the ALICE TPC Read Out (ALTRO), this device was designed to minimize the size and power consumption of the TPC front end electronics. This single chip contains 16 low-power analogue-to-digital converters with six million transistors of digital processing and 8 kbits of data storage.

  8. Optimal Processor Assignment for Pipeline Computations

    Science.gov (United States)

    1991-10-01

    the use of ratios: initially each task is assigned a processor; the remaining processors are distributed in proportion to the quantities f_i(1), 1 ≤ i ... algorithms. IEEE Trans. on Parallel and Distributed Systems, 1(4):470-499, October 1990. [26] P. M. Kogge. The Architecture of Pipelined Computers

  9. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  10. Precision analog signal processor for beam position measurements in electron storage rings

    International Nuclear Information System (INIS)

    Hinkson, J.A.; Unser, K.B.

    1995-05-01

    Beam position monitors (BPM) in electron and positron storage rings have evolved from simple systems composed of beam pickups, coaxial cables, multiplexing relays, and a single receiver (usually an analyzer) into very complex and costly systems of multiple receivers and processors. The older systems may have taken minutes to measure the circulating beam closed orbit. Today instrumentation designers are required to provide high-speed measurements of the beam orbit, often at the ring revolution frequency. In addition the instruments must have very high accuracy and resolution. A BPM has been developed for the Advanced Light Source (ALS) in Berkeley which features high resolution and relatively low cost. The instrument has a single purpose: to measure the position of a stable stored beam. Because the pickup signals are multiplexed into a single receiver, and due to its narrow bandwidth, the receiver is not intended for single-turn studies. The receiver delivers normalized measurements of X and Y position entirely by analog means at nominally 1 V/mm. No computers are involved. No software is required. Bergoz, a French company specializing in precision beam instrumentation, integrated the ALS design in their new BPM analog signal processor module. Performance comparisons were made on the ALS. In this paper we report on the architecture and performance of the ALS prototype BPM

  12. Single-lens computed tomography imaging spectrometer and method of capturing spatial and spectral information

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor); Bearman, Gregory H. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") employing a single lens are provided. The CTISs may be either transmissive or reflective, and the single lens is either configured to transmit and receive uncollimated light (in transmissive systems), or is configured to reflect and receive uncollimated light (in reflective systems). An exemplary transmissive CTIS includes a focal plane array detector, a single lens configured to transmit and receive uncollimated light, a two-dimensional grating, and a field stop aperture. An exemplary reflective CTIS includes a focal plane array detector, a single mirror configured to reflect and receive uncollimated light, a two-dimensional grating, and a field stop aperture.

  13. Making CSB + -Trees Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of CSB+-tree on Itanium 2. Finally, we propose a systematic method for adapting CSB+-tree to new platforms. This work is a first step towards integrating CSB+-tree in MySQL’s heap storage manager.

  14. Java Processor Optimized for RTSJ

    Directory of Open Access Journals (Sweden)

    Tu Shiliang

    2007-01-01

    Due to the preeminent work on the real-time specification for Java (RTSJ), Java is increasingly expected to become the leading programming language for real-time systems. To provide a Java platform suitable for real-time applications, a Java processor which can directly execute Java bytecode is proposed in this paper. It provides efficient hardware support for some mechanisms specified in the RTSJ and offers a simpler programming model by ameliorating the RTSJ's scoped memory. The worst-case execution time (WCET) of the bytecodes implemented in this processor is predictable by employing the optimization method proposed in our previous work, in which all processing that interferes with predictability is handled before bytecode execution. A further advantage of this method is that it makes the implementation of the processor simpler and suited to a low-cost FPGA chip.

  15. An HDLC communication processor

    International Nuclear Information System (INIS)

    Brehmer, W.; Wawer, W.

    1981-09-01

    Systems for data acquisition and process control may comprise several intelligent stations with local computing power, each of them performing specific tasks in the control system. These stations generally are not independent of each other but are interconnected by the process being monitored or controlled. Therefore they must communicate with each other by transmitting or receiving messages which may contain instructions directed to the control system, status information and data from peripheral devices, variables which synchronize the execution of programs in the autonomous intelligent stations, and data for man-machine communication. This report describes a microprocessor based device which interfaces the I/O port of a host computer (CAMAC Dataway) to an HDLC bus system. The microprocessor (M6800) performs the HDLC protocol for a Multi-Point Unbalanced System. (orig.)

  16. Optical Array Processor: Laboratory Results

    Science.gov (United States)

    Casasent, David; Jackson, James; Vaerewyck, Gerard

    1987-01-01

    A Space Integrating (SI) Optical Linear Algebra Processor (OLAP) is described and laboratory results on its performance in several practical engineering problems are presented. The applications include its use in the solution of a nonlinear matrix equation for optimal control and a parabolic Partial Differential Equation (PDE), the transient diffusion equation with two spatial variables. Frequency-multiplexed, analog and high accuracy non-base-two data encoding are used and discussed. A multi-processor OLAP architecture is described and partitioning and data flow issues are addressed.

  17. Fast processor for dilepton triggers

    International Nuclear Information System (INIS)

    Katsanevas, S.; Kostarakis, P.; Baltrusaitis, R.

    1983-01-01

    We describe a fast trigger processor, developed for and used in Fermilab experiment E-537, for selecting high-mass dimuon events produced by negative pions and anti-protons. The processor finds candidate tracks by matching hit information received from drift chambers and scintillation counters, and determines their momenta. Invariant masses are calculated for all possible pairs of tracks and an event is accepted if any invariant mass is greater than some preselectable minimum mass. The whole process, accomplished within 5 to 10 microseconds, achieves up to a ten-fold reduction in trigger rate

  18. SAPIENS: Spreading Activation Processor for Information Encoded in Network Structures. Technical Report No. 296.

    Science.gov (United States)

    Ortony, Andrew; Radin, Dean I.

    The product of researchers' efforts to develop a computer processor which distinguishes between relevant and irrelevant information in the database, Spreading Activation Processor for Information Encoded in Network Structures (SAPIENS) exhibits (1) context sensitivity, (2) efficiency, (3) decreasing activation over time, (4) summation of…

  19. Mobile clusters of single board computers: an option for providing resources to student projects and researchers.

    Science.gov (United States)

    Baun, Christian

    2016-01-01

    Clusters usually consist of servers, workstations or personal computers as nodes. But especially for academic purposes like student projects or scientific projects, the cost of purchase and operation can be a challenge. Single board computers cannot compete with the performance or energy-efficiency of higher-value systems, but they are an option for building inexpensive cluster systems. Because of their compact design and modest energy consumption, it is possible to build clusters of single board computers in a way that they are mobile and can be easily transported by the users. This paper describes the construction of such a cluster, useful applications and the performance of the single nodes. Furthermore, the clusters' performance and energy-efficiency are analyzed by executing the High Performance Linpack benchmark with different numbers of nodes and different proportions of the system's total main memory utilized.

  20. Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry

    Science.gov (United States)

    Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul

    2003-01-01

    Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.

  1. Computing platforms for software-defined radio

    CERN Document Server

    Nurmi, Jari; Isoaho, Jouni; Garzia, Fabio

    2017-01-01

    This book addresses Software-Defined Radio (SDR) baseband processing from the computer architecture point of view, providing a detailed exploration of different computing platforms by classifying different approaches, highlighting the common features related to SDR requirements and by showing pros and cons of the proposed solutions. Coverage includes architectures exploiting parallelism by extending single-processor environment (such as VLIW, SIMD, TTA approaches), multi-core platforms distributing the computation to either a homogeneous array or a set of specialized heterogeneous processors, and architectures exploiting fine-grained, coarse-grained, or hybrid reconfigurability. Describes a computer engineering approach to SDR baseband processing hardware; Discusses implementation of numerous compute-intensive signal processing algorithms on single and multicore platforms; Enables deep understanding of optimization techniques related to power and energy consumption of multicore platforms using several basic a...

  2. Very wide register : an asymmetric register file organization for low power embedded processors.

    NARCIS (Netherlands)

    Raghavan, P.; Lambrechts, A.; Jayapala, M.; Catthoor, F.; Verkest, D.T.M.L.; Corporaal, H.

    2007-01-01

    In current embedded systems processors, multi-ported register files are one of the most power hungry parts of the processor, even when they are clustered. This paper presents a novel register file architecture, which has single ported cells and asymmetric interfaces to the memory and to the

  3. Slowdown in the $M/M/1$ discriminatory processor-sharing queue

    NARCIS (Netherlands)

    Cheung, S.K.; Kim, Bara; Kim, Jeongsim

    2008-01-01

    We consider a queue with K job classes, Poisson arrivals, and exponentially distributed required service times in which a single processor serves according to the discriminatory processor-sharing (DPS) discipline. For this queue, we obtain the first and second moments of the slowdown, which

  4. Reward-based learning under hardware constraints - Using a RISC processor embedded in a neuromorphic substrate

    Directory of Open Access Journals (Sweden)

    Simon eFriedmann

    2013-09-01

    In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e. the increase in received reward after learning. Using this performance as baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider communication latency between the wafer and the conventional control computer system that is simulating the environment. This latency increases the delay with which the reward is sent to the embedded processor. Because of the time-continuous operation of the analog synapses, delay can cause a deviation of the updates as compared to the non-delayed situation. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.

  5. A Software Implementation of a Satellite Interface Message Processor.

    Science.gov (United States)

    Eastwood, Margaret A.; Eastwood, Lester F., Jr.

    A design for network control software for a computer network is described in which some nodes are linked by a communications satellite channel. It is assumed that the network has an ARPANET-like configuration; that is, that specialized processors at each node are responsible for message switching and network control. The purpose of the control…

  6. Sojourn time asymptotics in processor-sharing queues

    NARCIS (Netherlands)

    Borst, S.C.; Núñez Queija, R.; Zwart, B.

    2006-01-01

    Over the past few decades, the Processor-Sharing (PS) discipline has attracted a great deal of attention in the queueing literature. While the PS paradigm emerged in the sixties as an idealization of round-robin scheduling in time-shared computer systems, it has recently captured renewed interest as

  7. Elementary function calculation programs for the central processor-6

    International Nuclear Information System (INIS)

    Dobrolyubov, L.V.; Ovcharenko, G.A.; Potapova, V.A.

    1976-01-01

    Subprograms for calculating elementary functions are given for the central processor of the AS-6 (CP AS-6). A procedure is described for obtaining the calculational formulae which represent the elementary functions as polynomials. Standard programs for random numbers are also considered. All the programs described are based upon the algorithms of the corresponding programs for the BESM computer
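
    As an illustration of the polynomial approach described above (the AS-6 library's actual coefficients are not given in the record), a sketch that evaluates sin(x) from a truncated series with Horner's rule:

```python
# Polynomial evaluation of an elementary function, as such library
# subprograms typically do. The coefficients below are the truncated
# Taylor series of sin(x)/x in powers of x^2, adequate near zero; a
# production library would use minimax coefficients plus range reduction.
import math

SIN_COEFFS = [1.0, -1.0 / 6.0, 1.0 / 120.0, -1.0 / 5040.0]

def poly_sin(x: float) -> float:
    """sin(x) ~ x * P(x^2), with P evaluated by Horner's rule."""
    x2 = x * x
    acc = 0.0
    for c in reversed(SIN_COEFFS):
        acc = acc * x2 + c
    return x * acc

print(poly_sin(0.5), math.sin(0.5))   # agree to about 1e-8 near zero
```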

  8. Optimization of Particle-in-Cell Codes on RISC Processors

    Science.gov (United States)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines. An example of the cache-oriented reorganization is sketched below.
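
    The sketch is a generic illustration in Python with numpy, not the paper's Fortran code: keep particle data as a structure of arrays and periodically sort particles by grid cell, so that the deposition loop sweeps memory (and cache lines) sequentially.

```python
# Data reorganisation for cache locality in a particle-in-cell step:
# structure-of-arrays particle storage, reordered so particles in the
# same grid cell are adjacent before charge deposition.
import numpy as np

rng = np.random.default_rng(0)
nx, n = 64, 100_000
x = rng.uniform(0, nx, n)          # particle positions (structure of arrays)
vx = rng.normal(size=n)            # particle velocities

def sort_by_cell(x, vx):
    """Reorder particles so those in the same grid cell are adjacent."""
    order = np.argsort(x.astype(np.int64), kind="stable")
    return x[order], vx[order]

x, vx = sort_by_cell(x, vx)
charge = np.bincount(x.astype(np.int64), minlength=nx)  # cache-friendly sweep
print(charge[:8])
```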

  9. Adaptation of a Fault-Tolerant Fpga-Based Launch Sequencer as a Cubesat Payload Processor

    Science.gov (United States)

    2014-06-01

    ... a 32-bit, reduced instruction set computing (RISC) processor that interfaces with a universal asynchronous receiver/transmitter (UART) for a field ... test a fault-tolerant reduced instruction set computer processor running a subset of the microprocessor without interlocked pipelined stages (MIPS) instruction ...

  10. A COTS-based single board radiation-hardened computer for space applications

    International Nuclear Information System (INIS)

    Stewart, S.; Hillman, R.; Layton, P.; Krawzsenek, D.

    1999-01-01

    There is great community interest in the ability to use COTS (Commercial-Off-The-Shelf) technology in radiation environments. Space Electronics, Inc. has developed a high performance COTS-based radiation hardened computer. COTS approaches were selected for both hardware and software. Through parts testing, selection and packaging, all requirements have been met without parts or process development. Reliability, total ionizing dose and single event performance are attractive. The characteristics, performance and radiation resistance of the single board computer will be presented. (authors)

  11. Encouraging more women into computer science: Initiating a single-sex intervention program in Sweden

    Science.gov (United States)

    Brandell, Gerd; Carlsson, Svante; Ekblom, Håkan; Nord, Ann-Charlotte

    1997-11-01

    The process of starting a new program in computer science and engineering, heavily based on applied mathematics and only open to women, is described in this paper. The program was introduced into an educational system without any tradition in single-sex education. Important observations made during the process included the considerable interest in mathematics and curiosity about computer science found among female students at the secondary school level, and the acceptance of the single-sex program by the staff, administration, and management of the university as well as among male and female students. The process described highlights the importance of preparing the environment for a totally new type of educational program.

  12. Simultaneous fluid-flow, heat-transfer and solid-stress computation in a single computer code

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, D B [Concentration Heat and Momentum Ltd, London (United Kingdom)

    1998-12-31

    Computer simulation of flow- and thermally-induced stresses in mechanical-equipment assemblies has, in the past, required the use of two distinct software packages, one to determine the forces and the temperatures, and the other to compute the resultant stresses. The present paper describes how a single computer program can perform both tasks at the same time. The technique relies on the similarity of the equations governing velocity distributions in fluids to those governing displacements in solids. The same SIMPLE-like algorithm is used for solving both. Applications to 1-, 2- and 3-dimensional situations are presented. It is further suggested that Solid-Fluid-Thermal, i.e. SFT, analysis may come to replace CFD on the one hand and the analysis of stresses in solids on the other, by performing the functions of both. (author) 7 refs.

  14. Study and Implementation of a Real-Time Operating System on an ARM-Based Single Board Computer

    OpenAIRE

    A, Wiedjaja; M, Handi; L, Jonathan; Christian, Benyamin; Kristofel, Luis

    2014-01-01

    An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and car braking systems (auto braking systems), which need a high level of reliability, require an operating system that operates in real time. The study aims to assess the implementation of a Linux-based real-time operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with ...

  15. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT
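
    A much simplified 1D rendition of the algorithm's three steps (nearest-neighbour deposit into subdivided bins, frequency-space convolution with the projected pixel footprint, compression back to the original bin size); the paper's angle-dependent footprint is reduced to a fixed box here:

```python
# Simplified 1D sketch of the re-projection idea: deposit activity into
# finely subdivided projection bins, convolve with a square-pixel
# footprint via the FFT (faster than spatial convolution), then sum
# groups of fine bins back down to detector-bin resolution.
import numpy as np

def reproject(positions, activities, n_bins, subdiv=4, footprint=3):
    """Project point activities onto detector bins via subdivided bins."""
    fine = np.zeros(n_bins * subdiv)
    idx = np.clip((positions * subdiv).astype(int), 0, fine.size - 1)
    np.add.at(fine, idx, activities)           # nearest-neighbour deposit
    kernel = np.zeros_like(fine)
    kernel[:footprint] = 1.0 / footprint       # fixed-box pixel footprint
    fine = np.fft.ifft(np.fft.fft(fine) * np.fft.fft(kernel)).real
    return fine.reshape(n_bins, subdiv).sum(axis=1)   # compress to bin size

# Two point sources at bin positions 2.5 and 7.1 on a 16-bin detector
print(reproject(np.array([2.5, 7.1]), np.array([1.0, 2.0]), 16))
```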

  16. Impacts of the IBM Cell Processor to Support Climate Models

    Science.gov (United States)

    Zhou, Shujia; Duffy, Daniel; Clune, Tom; Suarez, Max; Williams, Samuel; Halem, Milt

    2008-01-01

    NASA is interested in the performance and cost benefits of adapting its applications to the IBM Cell processor. However, its 256KB local memory per SPE and the new communication mechanism make it very challenging to port an application. We selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics (approximately 50% of computational time), (2) has a high computational load relative to transferring data from and to main memory, (3) performs independent calculations across multiple columns. We converted the baseline code (single-precision, Fortran) to C and ported it, manually SIMDizing 4 independent columns, and found that a Cell with 8 SPEs can process 2274 columns per second. Compared with the baseline results, the Cell is approximately 5.2X, 8.2X, and 15.1X faster than a core on Intel Woodcrest, Dempsey, and Itanium2, respectively. We believe this dramatic performance improvement makes a hybrid cluster with Cell and traditional nodes competitive.

  17. COMPUTING: International symposium

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    Recent Developments in Computing, Processor, and Software Research for High Energy Physics, a four-day international symposium, was held in Guanajuato, Mexico, from 8-11 May, with 112 attendees from nine countries. The symposium was the third in a series of meetings exploring activities in leading-edge computing technology in both processor and software research and their effects on high energy physics. Topics covered included fixed-target on- and off-line reconstruction processors; lattice gauge and general theoretical processors and computing; multiprocessor projects; electron-positron collider on- and off-line reconstruction processors; state-of-the-art in university computer science and industry; software research; accelerator processors; and proton-antiproton collider on- and off-line reconstruction processors

  18. Dynamic overset grid communication on distributed memory parallel processors

    Science.gov (United States)

    Barszcz, Eric; Weeratunga, Sisira K.; Meakin, Robert L.

    1993-01-01

    A parallel distributed memory implementation of intergrid communication for dynamic overset grids is presented. Included are discussions of various options considered during development. Results are presented comparing an Intel iPSC/860 to a single processor Cray Y-MP. Results for grids in relative motion show the iPSC/860 implementation to be faster than the Cray implementation.

  19. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    OpenAIRE

    Hristov Ivan; Goranov Goran; Hristova Radoslava

    2018-01-01

    We consider a standing wave simulation that is interesting from a computational point of view, solving coupled 2D perturbed sine-Gordon equations. We make an OpenMP realization which explores both thread and SIMD levels of parallelism. We test the OpenMP program on two different energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the HybriLIT cluster, and a Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better per...

  20. Matrix preconditioning: a robust operation for optical linear algebra processors.

    Science.gov (United States)

    Ghosh, A; Paparao, P

    1987-07-15

    Analog electrooptical processors are best suited for applications demanding high computational throughput with tolerance for inaccuracies. Matrix preconditioning is one such application. Matrix preconditioning is a preprocessing step for reducing the condition number of a matrix and is used extensively with gradient algorithms for increasing the rate of convergence and improving the accuracy of the solution. In this paper, we describe a simple parallel algorithm for matrix preconditioning, which can be implemented efficiently on a pipelined optical linear algebra processor. From the results of our numerical experiments we show that the efficacy of the preconditioning algorithm is affected very little by the errors of the optical system.
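
    The effect preconditioning has on a gradient solver is easy to demonstrate numerically; a small sketch (an ordinary digital computation standing in for the paper's optical implementation) with a Jacobi preconditioner inside conjugate gradients:

```python
# Why preconditioning helps a gradient algorithm: a diagonal (Jacobi)
# preconditioner lowers the effective condition number and cuts the
# number of conjugate-gradient iterations needed to converge.
import numpy as np

rng = np.random.default_rng(1)
n = 200
noise = rng.standard_normal((n, n)) * 0.01
A = np.diag(np.linspace(1.0, 1e4, n)) + (noise + noise.T) / 2  # SPD, kappa ~ 1e4
b = rng.standard_normal(n)

def pcg_iterations(A, b, M_inv, tol=1e-8, max_iter=5000):
    """Preconditioned conjugate gradients; returns iterations to converge."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return k
        z_new = M_inv @ r_new
        p = z_new + ((r_new @ z_new) / (r @ z)) * p
        r, z = r_new, z_new
    return max_iter

print("identity:", pcg_iterations(A, b, np.eye(n)))                  # many steps
print("Jacobi:  ", pcg_iterations(A, b, np.diag(1.0 / np.diag(A))))  # few steps
```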

  1. Ring-array processor distribution topology for optical interconnects

    Science.gov (United States)

    Li, Yao; Ha, Berlin; Wang, Ting; Wang, Sunyu; Katz, A.; Lu, X. J.; Kanterakis, E.

    1992-01-01

    The existing linear and rectangular processor distribution topologies for optical interconnects, although promising in many respects, cannot solve problems such as clock skews, the lack of supporting elements for efficient optical implementation, etc. The use of a ring-array processor distribution topology, however, can overcome these problems. Here, a study of the ring-array topology is conducted with an aim of implementing various fast clock rate, high-performance, compact optical networks for digital electronic multiprocessor computers. Practical design issues are addressed. Some proof-of-principle experimental results are included.

  2. Parallel Processor for 3D Recovery from Optical Flow

    Directory of Open Access Journals (Sweden)

    Jose Hugo Barron-Zambrano

    2009-01-01

    3D recovery from motion has received a major effort in computer vision systems in recent years. The main problem lies in the number of operations and memory accesses to be performed by the majority of the existing techniques when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main feature is the maximum reuse of data and the low number of clock cycles required to calculate the optical flow, along with the precision with which 3D recovery is achieved. The results of the proposed architecture as well as those from processor synthesis are presented.

  3. The associative memory system for the FTK processor at ATLAS

    CERN Document Server

    Magalotti, D; The ATLAS collaboration; Donati, S; Luciano, P; Piendibene, M; Giannetti, P; Lanza, A; Verzellesi, G; Sakellariou, Andreas; Billereau, W; Combe, J M

    2014-01-01

    In high energy physics experiments, the most interesting processes are very rare and hidden in an extremely large level of background. As the experiment complexity, accelerator backgrounds, and instantaneous luminosity increase, more effective and accurate data selection techniques are needed. The Fast TracKer processor (FTK) is a real time tracking processor designed for the ATLAS trigger upgrade. The FTK core is the Associative Memory system. It provides massive computing power to minimize the processing time of complex tracking algorithms executed online. This paper reports on the results and performance of a new prototype of Associative Memory system.

  4. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Kindratenko

    2009-01-01

    We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× as compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
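
    For reference, the kernel itself is simple to state; a brute-force O(n^2) Python version of the pair-separation histogram at the heart of the two-point angular correlation function (points and bin edges here are made up):

```python
# CPU reference for the two-point angular correlation kernel that the
# reconfigurable processors accelerate: histogram the angular
# separation of every pair of points on the sky (brute force).
import numpy as np

def pair_histogram(ra, dec, bin_edges_deg):
    """Histogram angular separations (degrees) of all point pairs."""
    ra, dec = np.radians(ra), np.radians(dec)
    x = np.cos(dec) * np.cos(ra)              # unit vectors on the sphere
    y = np.cos(dec) * np.sin(ra)
    z = np.sin(dec)
    dots = np.clip(x[:, None] * x + y[:, None] * y + z[:, None] * z, -1, 1)
    sep = np.degrees(np.arccos(dots[np.triu_indices(len(ra), k=1)]))
    return np.histogram(sep, bins=bin_edges_deg)[0]

rng = np.random.default_rng(2)
ra = rng.uniform(0, 360, 500)                 # synthetic catalogue
dec = rng.uniform(-10, 10, 500)
print(pair_histogram(ra, dec, np.arange(0, 91, 10)))
```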

  5. SAD PROCESSOR FOR MULTIPLE MACROBLOCK MATCHING IN FAST SEARCH VIDEO MOTION ESTIMATION

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2015-02-01

    Motion estimation is a very important but computationally complex task in video coding. The process of determining motion vectors based on the temporal correlation of consecutive frames is used for video compression. In order to reduce the computational complexity of motion estimation and maintain the quality of encoding during motion compensation, different fast search techniques are available. These block-based motion estimation algorithms use the sum of absolute differences (SAD) between the corresponding macroblock in the current frame and all candidate macroblocks in the reference frame to identify the best match. Existing implementations can perform SAD between two blocks using a sequential or pipelined approach, but performing multi-operand SAD in a single clock cycle with optimized resources is state of the art. In this paper various parallel architectures for computation of the fixed-block-size SAD are evaluated and a fast parallel SAD architecture is proposed with optimized resources. Further, a SAD processor is described with 9 processing elements which can be configured for any existing fast-search block matching algorithm. The proposed SAD processor consumes 7% fewer adders compared to existing implementations with one processing element. Using nine PEs it can process 84 HD frames per second in the worst case, which is a good outcome for real-time implementation. In the average case the architecture processes 325 HD frames per second.
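
    A software reference for the block-matching arithmetic described above (exhaustive search shown for clarity; the fast-search algorithms the processor supports visit far fewer candidate positions):

```python
# Reference for the multi-operand SAD the processor computes in one
# clock: sum of absolute differences between a current macroblock and
# a candidate block in the reference frame, plus an exhaustive search.
import numpy as np

def sad(current, reference, dx, dy):
    """SAD between a block and the reference block offset by (dx, dy)."""
    h, w = current.shape
    cand = reference[dy:dy + h, dx:dx + w].astype(np.int32)
    return int(np.abs(current.astype(np.int32) - cand).sum())

def best_match(current, reference, search=8):
    """Exhaustive search over a small window; returns (SAD, (dx, dy))."""
    return min(((sad(current, reference, dx, dy), (dx, dy))
                for dy in range(search) for dx in range(search)))

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = ref[5:21, 3:19].copy()                  # 16x16 block taken at (3, 5)
print(best_match(cur, ref))                   # -> (0, (3, 5))
```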

  6. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  8. VON WISPR Family Processors: Volume 1

    National Research Council Canada - National Science Library

    Wagstaff, Ronald

    1997-01-01

    ...) and the background noise they are embedded in. Processors utilizing those fluctuations, such as the von WISPR Family Processors discussed herein, are methods or algorithms that preferentially attenuate the fluctuating signals and noise...

  9. Study on irradiation effects of nucleus electromagnetic pulse on single chip computer system

    International Nuclear Information System (INIS)

    Hou Minsheng; Liu Shanghe; Wang Shuping

    2001-01-01

    Intense electromagnetic pulses, namely the nucleus electromagnetic pulse (NEMP), the lightning electromagnetic pulse (LEMP) and high power microwave (HPM), can disturb and destroy single chip computer systems. To study this issue, the authors conducted irradiation experiments with NEMPs generated by a gigahertz transverse electromagnetic (GTEM) cell. The experiments show that shutdowns, restarts and communication errors of the single chip microcomputer system occurred when it was irradiated by the NEMPs. Based on the experiments, the cause of the effects on the single chip microcomputer system is discussed

  10. Processor farming in two-level analysis of historical bridge

    Science.gov (United States)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain mesoscopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is well suited to parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.

  11. Deterministic chaos in the processor load

    International Nuclear Information System (INIS)

    Halbiniak, Zbigniew; Jozwiak, Ireneusz J.

    2007-01-01

    In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor running the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case

  12. Female Students' Experiences of Computer Technology in Single- versus Mixed-Gender School Settings

    Science.gov (United States)

    Burke, Lee-Ann; Murphy, Elizabeth

    2006-01-01

    This study explores how female students compare learning computer technology in a single- versus a mixed-gender school setting. Twelve females participated, all of whom were enrolled in a grade 12 course in Communications Technology. Data collection included a questionnaire, a semi-structured interview and focus groups. Participants described…

  13. Detection of User Independent Single Trial ERPs in Brain Computer Interfaces: An Adaptive Spatial Filtering Approach

    DEFF Research Database (Denmark)

    Leza, Cristina; Puthusserypady, Sadasivan

    2017-01-01

    Brain Computer Interfaces (BCIs) use brain signals to communicate with the external world. The main challenges to address are speed, accuracy and adaptability. Here, a novel algorithm for a P300-based BCI spelling system is presented, specifically suited for single-trial detection of Event...

  14. Myndplay: Measuring Attention Regulation with Single Dry Electrode Brain Computer Interface

    NARCIS (Netherlands)

    van der Wal, C.N.; Irrmischer, M.; Guo, Y.; Friston, K.; Faisal, A.; Hill, S.; Peng, H.

    2015-01-01

    Future applications for the detection of attention can be helped by the development and validation of single electrode brain computer interfaces that are small and user-friendly. The two objectives of this study were to (1) understand the correlates of attention regulation as detected with the

  15. A device for taking Mössbauer data by a single-board computer

    International Nuclear Information System (INIS)

    Yin Chuanyuan

    1987-01-01

    A device for taking Mössbauer data based on a single-board computer is described. The device is simple in construction and easy to operate. The spectrum is displayed on an oscillograph, making it very convenient for teaching and research work

  16. Scientific programming on massively parallel processor CP-PACS

    International Nuclear Information System (INIS)

    Boku, Taisuke

    1998-01-01

    The massively parallel processor CP-PACS targets a range of problems in computational physics, and its architecture has been designed to support a variety of numerical processing tasks. This report outlines the CP-PACS, gives a programming example based on the Kernel CG benchmark from NAS Parallel Benchmarks version 1, and describes the two main features of the CP-PACS architecture: the pseudo-vector processing mechanism and the parallel-processing tuning of scientific and technical computations that exploits the three-dimensional hyper-crossbar network. The CP-PACS uses processing units (PUs) based on a RISC processor augmented with a pseudo-vector processor; pseudo-vector processing is realized as loop processing by scalar instructions. The features of the network connecting the PUs are explained. The algorithm of the NPB version 1 Kernel CG is shown. The most time-consuming part of the main loop is the matrix-vector product (matvec), and its parallelization is explained. The computation time on the CPU is determined. As a performance evaluation, the execution time, the short-vector processing of the pseudo-vector processor based on a sliding window, and a comparison with other parallel computers are reported. (K.I.)
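
    The row-wise decomposition of the matvec described above can be sketched in a few lines. This is only an illustration of the data decomposition (plain Python with a thread pool; it does not model the CP-PACS pseudo-vector units or the hyper-crossbar communication):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_matvec(A, x, n_workers=4):
    # row-block decomposition: each worker computes y = A @ x for its strip
    # of rows, mirroring how the matvec is split across processing units
    strips = np.array_split(np.arange(A.shape[0]), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda rows: A[rows] @ x, strips)
    return np.concatenate(list(parts))
```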

  17. Safety-critical Java on a time-predictable processor

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Schoeberl, Martin; Puffitsch, Wolfgang

    2015-01-01

    For real-time systems the whole execution stack needs to be time-predictable and analyzable for the worst-case execution time (WCET). This paper presents a time-predictable platform for safety-critical Java. The platform consists of (1) the Patmos processor, which is a time-predictable processor; (2) a C compiler for Patmos with support for WCET analysis; (3) the HVM, which is a Java-to-C compiler; (4) the HVM-SCJ implementation which supports SCJ Level 0, 1, and 2 (for both single and multicore platforms); and (5) a WCET analysis tool. We show that real-time Java programs translated to C and compiled to a Patmos binary can be analyzed by the AbsInt aiT WCET analysis tool. To the best of our knowledge the presented system is the second WCET-analyzable real-time Java system, and the first one on top of a RISC processor.

  18. JPP: A Java Pre-Processor

    OpenAIRE

    Kiniry, Joseph R.; Cheong, Elaine

    1998-01-01

    The Java Pre-Processor, or JPP for short, is a parsing pre-processor for the Java programming language. Unlike its namesake (the C/C++ Pre-Processor, cpp), JPP provides functionality above and beyond simple textual substitution. JPP's capabilities include code beautification, code standard conformance checking, class and interface specification and testing, and documentation generation.

  19. Coordinated Energy Management in Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Indrani Paul

    2014-01-01

    Full Text Available This paper examines energy management in a heterogeneous processor consisting of an integrated CPU–GPU for high-performance computing (HPC) applications. Energy management for HPC applications is challenged by their uncompromising performance requirements and complicated by the need for coordinating energy management across distinct core types – a new and less understood problem. We examine the intra-node CPU–GPU frequency sensitivity of HPC applications on tightly coupled CPU–GPU architectures as the first step in understanding power and performance optimization for a heterogeneous multi-node HPC system. The insights from this analysis form the basis of a coordinated energy management scheme, called DynaCo, for integrated CPU–GPU architectures. We implement DynaCo on a modern heterogeneous processor and compare its performance to a state-of-the-art power- and performance-management algorithm. DynaCo improves the measured average energy-delay squared (ED2) product by up to 30% with less than 2% average performance loss across several exascale and other HPC workloads.

  20. A novel single neuron perceptron with universal approximation and XOR computation properties.

    Science.gov (United States)

    Lotfi, Ehsan; Akbarzadeh-T, M-R

    2014-01-01

    We propose a biologically motivated, brain-inspired single neuron perceptron (SNP) with universal approximation and XOR computation properties. This computational model extends the input pattern and is based on excitatory and inhibitory learning rules inspired by neural connections in the human brain's nervous system. The resulting SNP architecture can be trained by supervised excitatory and inhibitory online learning rules. The main features of the proposed single-layer perceptron are its universal approximation property and low computational complexity. The method is tested on 6 UCI (University of California, Irvine) pattern recognition and classification datasets. Various comparisons with a multilayer perceptron (MLP) trained with the gradient descent backpropagation (GDBP) learning algorithm indicate the superiority of the approach in terms of higher accuracy, lower time and spatial complexity, and faster training. Hence, we believe the proposed approach can be generally applied to various problems such as pattern recognition and classification.

  1. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  2. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique on single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next-nearest, and long-range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures

  3. Computing Relative Free Energies of Solvation using Single Reference Thermodynamic Integration Augmented with Hamiltonian Replica Exchange.

    Science.gov (United States)

    Khavrutskii, Ilja V; Wallqvist, Anders

    2010-11-09

    This paper introduces an efficient single-topology variant of Thermodynamic Integration (TI) for computing relative transformation free energies in a series of molecules with respect to a single reference state. The presented TI variant, which we refer to as Single-Reference TI (SR-TI), combines well-established molecular simulation methodologies into a practical computational tool. Augmented with Hamiltonian Replica Exchange (HREX), the SR-TI variant can deliver enhanced sampling in select degrees of freedom. The utility of the SR-TI variant is demonstrated in calculations of relative solvation free energies for a series of benzene derivatives with increasing complexity. Notably, the SR-TI variant with the HREX option provides converged results in a challenging case of an amide molecule with a high (13-15 kcal/mol) barrier for internal cis/trans interconversion using simulation times of only 1 to 4 ns.
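
    The working equation shared by all TI variants, including SR-TI, is the standard thermodynamic integration formula, reproduced here for reference (λ switches the system from the reference state 0 to the target state 1):

```latex
\Delta A_{0 \to 1} = \int_{0}^{1}
  \left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda}
  \, d\lambda
```

    In practice the integral is evaluated numerically from simulations at a discrete set of λ values; the HREX option improves the sampling behind each ensemble average.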

  4. Computing in the presence of soft bit errors. [caused by single event upset on spacecraft

    Science.gov (United States)

    Rasmussen, R. D.

    1984-01-01

    It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.

  5. HTGR core seismic analysis using an array processor

    International Nuclear Information System (INIS)

    Shatoff, H.; Charman, C.M.

    1983-01-01

    A Floating Point Systems array processor performs nonlinear dynamic analysis of the high-temperature gas-cooled reactor (HTGR) core with significant time and cost savings. The graphite HTGR core consists of approximately 8000 blocks of various shapes which are subject to motion and impact during a seismic event. Two-dimensional computer programs (CRUNCH2D, MCOCO) can perform explicit step-by-step dynamic analyses of up to 600 blocks for time-history motions. However, use of two-dimensional codes was limited by the large cost and run times required. Three-dimensional analysis of the entire core, or even a large part of it, had been considered totally impractical. Because of the needs of the HTGR core seismic program, a Floating Point Systems array processor was used to enhance computer performance of the two-dimensional core seismic computer programs, MCOCO and CRUNCH2D. This effort began by converting the computational algorithms used in the codes to a form which takes maximum advantage of the parallel and pipeline processors offered by the architecture of the Floating Point Systems array processor. The subsequent conversion of the vectorized FORTRAN coding to the array processor required a significant programming effort to make the system work on the General Atomic (GA) UNIVAC 1100/82 host. These efforts were quite rewarding, however, since the cost of running the codes has been reduced approximately 50-fold and the time threefold. The core seismic analysis with large two-dimensional models has now become routine and extension to three-dimensional analysis is feasible. These codes simulate the one-fifth-scale full-array HTGR core model. This paper compares the analysis with the test results for sine-sweep motion

  6. No-go theorem for passive single-rail linear optical quantum computing.

    Science.gov (United States)

    Wu, Lian-Ao; Walther, Philip; Lidar, Daniel A

    2013-01-01

    Photonic quantum systems are among the most promising architectures for quantum computers. It is well known that for dual-rail photons effective non-linearities and near-deterministic non-trivial two-qubit gates can be achieved via the measurement process and by introducing ancillary photons. While in principle this opens a legitimate path to scalable linear optical quantum computing, the technical requirements are still very challenging and thus other optical encodings are being actively investigated. One of the alternatives is to use single-rail encoded photons, where entangled states can be deterministically generated. Here we prove that even for such systems universal optical quantum computing using only passive optical elements such as beam splitters and phase shifters is not possible. This no-go theorem proves that photon bunching cannot be passively suppressed even when extra ancilla modes and an arbitrary number of photons are used. Our result provides useful guidance for the design of optical quantum computers.

  7. Rational calculation accuracy in acousto-optical matrix-vector processor

    Science.gov (United States)

    Oparin, V. V.; Tigin, Dmitry V.

    1994-01-01

    The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.

  8. Processor-in-memory-and-storage architecture

    Science.gov (United States)

    DeBenedictis, Erik

    2018-01-02

    A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes a same instruction at a same time. A code analyzer receives a plurality of residues that represents a code word corresponding to the same instruction and an indication of whether the code word is a memory address code or a data code from the plurality of sub-cores. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and a determination of whether the plurality of residues are consistent or inconsistent.
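
    The consistency check on residues can be illustrated with a generic residue number system (RNS) error-detection sketch. The moduli below are hypothetical and the code is not the patented design; it only shows the arithmetic idea of flagging a residue set that no valid code word could have produced:

```python
from math import prod

BASE = (13, 15, 16)   # pairwise-coprime information moduli (hypothetical choice)
CHECK = 17            # redundant modulus used only for consistency checking

def encode(value):
    # represent a value as residues under every modulus, including the check modulus
    return tuple(value % m for m in BASE) + (value % CHECK,)

def crt(residues, moduli):
    # standard Chinese Remainder Theorem reconstruction
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
    return x % M

def consistent(word):
    # a corrupted residue generally breaks agreement with the redundant channel
    value = crt(word[:-1], BASE)
    return value % CHECK == word[-1]

word = encode(1000)
assert consistent(word)
assert not consistent(word[:1] + (word[1] ^ 1,) + word[2:])  # flip one residue bit
```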

  9. SOFT COMPUTING SINGLE HIDDEN LAYER MODELS FOR SHELF LIFE PREDICTION OF BURFI

    Directory of Open Access Journals (Sweden)

    Sumit Goyal

    2012-05-01

    Full Text Available Burfi is an extremely popular sweetmeat, which is prepared by desiccating standardized water buffalo milk. Soft computing feedforward single-layer models were developed for predicting the shelf life of burfi stored at 30°C. Data on the product relating to moisture, titratable acidity, free fatty acids, tyrosine, and peroxide value were used as input variables, and the overall acceptability score as the output variable. The results showed excellent agreement between the experimental and predicted data, suggesting that the developed soft computing model can alternatively be used for predicting the shelf life of burfi.

  10. Computation of Quantum Bound States on a Singly Punctured Two-Torus

    International Nuclear Information System (INIS)

    Kar-Tim Chan; Zainuddin Hishamuddin; Molladavoudi Saeid

    2013-01-01

    We study a quantum mechanical system on a singly punctured two-torus with bound states described by the Maass waveforms, which are eigenfunctions of the hyperbolic Laplace-Beltrami operator. Since the discrete eigenvalues of the Maass cusp form are not known analytically, they are solved numerically using an adapted algorithm of Hejhal and Then to compute Maass cusp forms on the punctured two-torus. We report on the computational results of the lower-lying eigenvalues for the punctured two-torus and find that they are doubly degenerate. We also visualize the eigenstates of selected eigenvalues using GridMathematica

  11. Computer-automated tuning of semiconductor double quantum dots into the single-electron regime

    Energy Technology Data Exchange (ETDEWEB)

    Baart, T. A.; Vandersypen, L. M. K. [QuTech, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Kavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Eendebak, P. T. [QuTech, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Netherlands Organisation for Applied Scientific Research (TNO), P.O. Box 155, 2600 AD Delft (Netherlands); Reichl, C.; Wegscheider, W. [Solid State Physics Laboratory, ETH Zürich, 8093 Zürich (Switzerland)

    2016-05-23

    We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the double quantum dots into the single-electron regime. The algorithm only requires (1) prior knowledge of the gate design and (2) the pinch-off value of the single gate T that is shared by all the quantum dots. This work significantly alleviates the user effort required to tune multiple quantum dot devices.

  12. Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes

    OpenAIRE

    Baiges Aznar, Joan; Bayona Roa, Camilo Andrés

    2017-01-01

    In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions where the nodes of the mesh belong to a single processor, whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper a...

  13. The role of dendritic non-linearities in single neuron computation

    Directory of Open Access Journals (Sweden)

    Boris Gutkin

    2014-05-01

    Full Text Available Experiments have demonstrated that summation of excitatory post-synaptic potentials (EPSPs) in dendrites is non-linear. The sum of multiple EPSPs can be larger than their arithmetic sum, a superlinear summation due to the opening of voltage-gated channels that is similar to somatic spiking: the so-called dendritic spike. The sum of multiple EPSPs can also be smaller than their arithmetic sum, because the synaptic current necessarily saturates at some point. While these observations are well explained by biophysical models, the impact of dendritic spikes on computation remains a matter of debate. One reason is that dendritic spikes may fail to make the neuron spike; similarly, dendritic saturations are sometimes presented as a glitch which should be corrected by dendritic spikes. We provide solid arguments against this claim and show that dendritic saturations as well as dendritic spikes enhance single neuron computation, even when they cannot directly make the neuron fire. To explore the computational impact of dendritic spikes and saturations, we use a binary neuron model in conjunction with Boolean algebra. We demonstrate using these tools that a single dendritic non-linearity, either spiking or saturating, combined with the somatic non-linearity, enables a neuron to compute linearly non-separable Boolean functions (lnBfs). These functions are impossible to compute when summation is linear, and the exclusive OR is a famous example of lnBfs. Importantly, the implementation of these functions does not require the dendritic non-linearity to make the neuron spike. Next, we show that reduced and realistic biophysical models of the neuron are capable of computing lnBfs. Within these models, and contrary to the binary model, the dendritic and somatic non-linearities are tightly coupled. Yet we show that these neuron models are capable of linearly non-separable computations.
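
    The binary-neuron argument can be made concrete with a toy construction (my own illustration, not the authors' model, which uses purely excitatory weights): one saturating dendritic branch behaves like OR, and the soma combines it with the raw synaptic drive so that the (1,1) input is vetoed, yielding the linearly non-separable XOR:

```python
def dendrite(drive, cap=1.0):
    # saturating dendritic non-linearity: summation is linear up to the cap,
    # then sublinear (the saturation discussed in the abstract)
    return min(drive, cap)

def neuron_xor(x1, x2):
    drive = x1 + x2                 # arithmetic sum of the two EPSPs
    branch = dendrite(drive)        # saturating branch: acts like OR
    soma = 2.0 * branch - drive     # negative coupling stands in for inhibition
    return int(soma > 0.5)         # somatic threshold non-linearity

truth_table = [((a, b), neuron_xor(a, b)) for a in (0, 1) for b in (0, 1)]
assert [y for _, y in truth_table] == [0, 1, 1, 0]   # exclusive OR
```

    Note that neither non-linearity suffices alone: remove the saturation and the soma computes only a linearly separable threshold function; remove the somatic threshold and the output is merely graded.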

  14. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    Science.gov (United States)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
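
    The digit decomposition at the heart of digital partitioning is easy to emulate numerically. In the sketch below (illustrative names and base; non-negative integers that fit in n_digits base-b digits are assumed), each low-precision partial product plays the role of one analog optical pass, and the digital recombination weights the planes by powers of the base:

```python
import numpy as np

def digit_planes(x, base, n_digits):
    # decompose non-negative integer entries into base-`base` digits (LSB first)
    planes = []
    for _ in range(n_digits):
        planes.append(x % base)
        x = x // base
    return planes

def partitioned_matvec(A, v, base=4, n_digits=4):
    # each (i, j) pass multiplies one digit plane of A with one digit plane of v,
    # which only needs low accuracy; the exact result is recombined digitally
    A_planes = digit_planes(A, base, n_digits)
    v_planes = digit_planes(v, base, n_digits)
    y = np.zeros(A.shape[0], dtype=np.int64)
    for i, Ai in enumerate(A_planes):
        for j, vj in enumerate(v_planes):
            y += (base ** (i + j)) * (Ai @ vj)
    return y

A = np.array([[3, 7], [250, 12]])
v = np.array([5, 9])
assert np.array_equal(partitioned_matvec(A, v), A @ v)
```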

  15. PET and Single-Photon Emission Computed Tomography in Brain Concussion.

    Science.gov (United States)

    Raji, Cyrus A; Henderson, Theodore A

    2018-02-01

    This article offers an overview of the application of PET and single photon emission computed tomography brain imaging to concussion, a type of mild traumatic brain injury and traumatic brain injury, in general. The article reviews the application of these neuronuclear imaging modalities in cross-sectional and longitudinal studies. Additionally, this article frames the current literature with an overview of the basic physics and radiation exposure risks of each modality.

  16. Degenerative dementia: nosological aspects and results of single photon emission computed tomography

    International Nuclear Information System (INIS)

    Dubois, B.; Habert, M.O.

    1999-01-01

    Ten years ago, the differential diagnosis of dementia in the elderly patient was limited to two pathologies: Alzheimer's disease and Pick's disease. In recent years, the framework of these primary degenerative dementias has broken into pieces. The different diseases and the results obtained with single photon emission computed tomography are discussed, for example: fronto-temporal dementia, primary progressive aphasia, progressive apraxia, visuo-spatial dysfunction, dementia with Lewy bodies, and cortico-basal degeneration. (N.C.)

  17. Exploring Biomolecular Interactions Through Single-Molecule Force Spectroscopy and Computational Simulation

    OpenAIRE

    Yang, Darren

    2016-01-01

    Molecular interactions between cellular components such as proteins and nucleic acids govern the fundamental processes of living systems. Technological advancements in the past decade have allowed the characterization of these molecular interactions at the single-molecule level with high temporal and spatial resolution. Simultaneously, progress in computer simulation has enabled theoretical research at the atomistic level, assisting in the interpretation of experimental results. This thesi...

  18. Imaging by single photon emission computed tomography: interest in the pre surgical check up of epilepsy

    International Nuclear Information System (INIS)

    Biraben, A.; Bernard, A.M.

    1999-01-01

    Single photon emission computed tomography provides a more reliable technique, especially for spatially delimiting the epileptogenic zone before any surgical act. Determining the precise area is necessary to make sure that the seizures really arise from this area, and the functionality of this area is checked to ensure that its ablation will not lead to an unacceptable functional deficit. (N.C.)

  19. Universal quantum computation with the orbital angular momentum of a single photon

    International Nuclear Information System (INIS)

    García-Escartín, Juan Carlos; Chamorro-Posada, Pedro

    2011-01-01

    We prove that a single photon with quantum data encoded in its orbital angular momentum can be manipulated with simple optical elements to provide any desired quantum computation. We show how to build any quantum unitary operator using beamsplitters, phase shifters, holograms and an extraction gate based on quantum interrogation. The advantages and challenges of this approach are then discussed, in particular the problem of the readout of the results

  20. A comparative study of attenuation correction algorithms in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Murase, Kenya; Itoh, Hisao; Mogami, Hiroshi; Ishine, Masashiro; Kawamura, Masashi; Iio, Atsushi; Hamamoto, Ken

    1987-01-01

    A computer-based simulation method was developed to assess the relative effectiveness and availability of various attenuation compensation algorithms in single photon emission computed tomography (SPECT). The effects of the nonuniformity of the attenuation coefficient distribution in the body, of errors in determining the body contour, and of statistical noise on reconstruction accuracy, as well as the computation time of the algorithms, were studied. The algorithms were classified into three groups: precorrection, postcorrection and iterative correction methods. Furthermore, a hybrid method was devised by combining several methods. This study will be useful for understanding the characteristics, limitations and strengths of the algorithms and for searching for a practical correction method for photon attenuation in SPECT. (orig.)

  1. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    Science.gov (United States)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging

  2. Broadcasting collective operation contributions throughout a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.

  3. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  4. The diagnostic value of single-photon emission computed tomography/computed tomography for severe sacroiliac joint dysfunction.

    Science.gov (United States)

    Tofuku, Katsuhiro; Koga, Hiroaki; Komiya, Setsuro

    2015-04-01

    We aimed to evaluate the value of single-photon emission computed tomography (SPECT)/computed tomography (CT) for the diagnosis of sacroiliac joint (SIJ) dysfunction. SPECT/CT was performed in 32 patients with severe SIJ dysfunction who did not respond to 1 year of conservative treatment and had a score of >4 points on a 10-cm visual analog scale. We investigated the relationship between the presence of severe SIJ dysfunction and tracer accumulation, as confirmed by SPECT/CT. In cases of bilateral SIJ dysfunction, we also compared the intensity of tracer accumulation on each side. Moreover, we examined the relationship between the intensity of tracer accumulation and the different treatments the patients subsequently received. All 32 patients with severe SIJ dysfunction had tracer accumulation with a standardized uptake value (SUV) of >2.2 (mean SUV 4.7). In the 19 patients with lateralized symptom intensity, mean SUVs of the dominant side were significantly higher than those of the nondominant side. In the 10 patients with no lateralization, the difference in the SUVs between sides was small. These findings suggest that SPECT/CT is of value for the diagnosis of severe SIJ dysfunction as well as a useful technique for predicting the prognosis of this condition.

  5. Alternative Water Processor Test Development

    Science.gov (United States)

    Pickering, Karen D.; Mitchell, Julie; Vega, Leticia; Adam, Niklas; Flynn, Michael; Wjee (er. Rau); Lunn, Griffin; Jackson, Andrew

    2012-01-01

    The Next Generation Life Support Project is developing an Alternative Water Processor (AWP) as a candidate water recovery system for long duration exploration missions. The AWP consists of a biological water processor (BWP) integrated with a forward osmosis secondary treatment system (FOST). The basis of the BWP is a membrane aerated biological reactor (MABR), developed in concert with Texas Tech University. Bacteria located within the MABR metabolize organic material in wastewater, converting approximately 90% of the total organic carbon to carbon dioxide. In addition, bacteria convert a portion of the ammonia-nitrogen present in the wastewater to nitrogen gas, through a combination of nitrification and denitrification. The effluent from the BWP system is low in organic contaminants, but high in total dissolved solids. The FOST system, integrated downstream of the BWP, removes dissolved solids through a combination of concentration-driven forward osmosis and pressure-driven reverse osmosis. The integrated system is expected to produce water with a total organic carbon of less than 50 mg/l and dissolved solids that meet potable water requirements for spaceflight. This paper describes the test definition, the design of the BWP and FOST subsystems, and plans for integrated testing.

  6. The UA1 trigger processor

    International Nuclear Information System (INIS)

    Grayer, G.H.

    1981-01-01

    Experiment UA1 is a large multi-purpose spectrometer at the CERN proton-antiproton collider, scheduled for late 1981. The principal trigger is formed on the basis of the energy deposition in calorimeters. A trigger decision taken in under 2.4 microseconds can avoid dead-time losses due to the bunched nature of the beam. To achieve this we have built fast 8-bit charge-to-digital converters followed by two identical digital processors tailored to the experiment. The outputs of groups of the 2440 photomultipliers in the calorimeters are summed to form a total of 288 input channels to the ADCs. A look-up table in RAM is used to convert the digitised photomultiplier signals to energy; one processor forms combinations of input channels and also counts the number of clusters with electromagnetic or hadronic energy above pre-determined levels. Up to twelve combinations of these conditions, together with external information, may be combined in coincidence or in veto to form the final trigger. Provision has been made for testing using simulated data in an off-line mode, and for sampling real data when on-line. (orig.)

  7. Experimental and Computational Characterization of Biological Liquid Crystals: A Review of Single-Molecule Bioassays

    Directory of Open Access Journals (Sweden)

    Sungsoo Na

    2009-09-01

    Full Text Available Quantitative understanding of the mechanical behavior of biological liquid crystals such as proteins is essential for gaining insight into their biological functions, since some proteins perform notable mechanical functions. Recently, single-molecule experiments have allowed not only the quantitative characterization of the mechanical behavior of proteins, such as protein unfolding mechanics, but also the exploration of the free energy landscape for protein folding. In this work, we have reviewed the current state of the art in single-molecule bioassays that enable quantitative studies of protein unfolding mechanics and/or various molecular interactions. Specifically, single-molecule pulling experiments based on atomic force microscopy (AFM) have been overviewed. In addition, computational simulations of single-molecule pulling experiments have been reviewed. We have also reviewed the AFM cantilever-based bioassay that provides insight into various molecular interactions. Our review highlights the AFM-based single-molecule bioassay for quantitative characterization of biological liquid crystals such as proteins.

  8. Data register and processor for multiwire chambers

    International Nuclear Information System (INIS)

    Karpukhin, V.V.

    1985-01-01

    A data register and a processor for receiving and processing data from the drift chambers of a facility for investigating relativistic positronium are described. The data arrive at the register input as an 8-bit Gray code, are memorized and are transformed into a position code. The register information is delivered to the CAMAC dataway and to the front panel connector. The processor selects particle tracks in the horizontal plane of the facility. The maximum coordinate divergence ΔY and the minimum number of points on a track are set from the processor front panel. The processor solution time is 16 μs; the maximum number of simultaneously analyzed coordinates is 16
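
    For reference, the Gray-code conversion implied by the register's input format is a standard bit transform (shown below in Python as an illustration, not the instrument's actual hardware logic):

```python
def binary_to_gray(b):
    # adjacent values differ in exactly one bit in the Gray encoding
    return b ^ (b >> 1)

def gray_to_binary(g):
    # invert the encoding by XOR-folding the higher bits downwards
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# round-trip over the full 8-bit range used by the register
assert all(gray_to_binary(binary_to_gray(n)) == n for n in range(256))
```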

  9. Sensitometric control of roentgen film processors

    International Nuclear Information System (INIS)

    Forsberg, H.; Karolinska Sjukhuset, Stockholm

    1987-01-01

    Monitoring of film processor performance is essential, since image quality, patient dose and costs are influenced by that performance. A system for sensitometric constancy control of film processors and their associated components is described, along with three years' experience with the system implemented on 17 film processors. Modern high-quality film processors have a stability that makes a test frequency of once a week sufficient to maintain adequate image quality. The test system is so sensitive that corrective actions have almost invariably been taken before any technical problem degraded the image quality to a visible degree. (orig.)

  10. Processor tradeoffs in distributed real-time systems

    Science.gov (United States)

    Krishna, C. M.; Shin, Kang G.; Bhandari, Inderpal S.

    1987-01-01

    The problem of the optimization of the design of real-time distributed systems is examined with reference to a class of computer architectures similar to the continuously reconfigurable multiprocessor flight control system structure, CM2FCS. Particular attention is given to the impact of processor replacement and the burn-in time on the probability of dynamic failure and mean cost. The solution is obtained numerically and interpreted in the context of real-time applications.

  11. Snow particles extracted from X-ray computed microtomography imagery and their single-scattering properties

    Science.gov (United States)

    Ishimoto, Hiroshi; Adachi, Satoru; Yamaguchi, Satoru; Tanikawa, Tomonori; Aoki, Teruo; Masuda, Kazuhiko

    2018-04-01

    Sizes and shapes of snow particles were determined from X-ray computed microtomography (micro-CT) images, and their single-scattering properties were calculated at visible and near-infrared wavelengths using a Geometrical Optics Method (GOM). We analyzed seven snow samples, including fresh and aged artificial snow and natural snow obtained from field samples. Individual snow particles were numerically extracted, and the shape of each snow particle was defined by applying a rendering method. The size distribution and specific surface area distribution were estimated from the geometrical properties of the snow particles, and an effective particle radius r_eff was derived for each snow sample. The GOM calculations at wavelengths of 0.532 and 1.242 μm revealed that the realistic snow particles had scattering phase functions similar to those of previously modeled irregularly shaped particles. Furthermore, distinct dendritic particles had a characteristic scattering phase function and asymmetry factor. The single-scattering properties of particles of effective radius r_eff were compared with the size-averaged single-scattering properties. We found that particles of radius r_eff could be used as representative particles for calculating the average single-scattering properties of the snow. Furthermore, the single-scattering properties of the micro-CT particles were compared to those of the particle shape models used in our current snow retrieval algorithm. For the single-scattering phase function, the results of the micro-CT particles were consistent with those of a conceptual two-shape model. However, the particle size dependence differed for the single-scattering albedo and asymmetry factor.

  12. Computational quantum chemistry for single Heisenberg spin couplings made simple: Just one spin flip required

    International Nuclear Information System (INIS)

    Mayhall, Nicholas J.; Head-Gordon, Martin

    2014-01-01

    We highlight a simple strategy for computing the magnetic coupling constants, J, for a complex containing two multiradical centers. On the assumption that the system follows Heisenberg Hamiltonian physics, J is obtained from a spin-flip electronic structure calculation where only a single electron is excited (and spin-flipped), from the single reference with maximum S_z, M, to the M − 1 manifold, regardless of the number of unpaired electrons, 2M, on the radical centers. In an active space picture involving 2M orbitals, only one β electron is required, together with only one α hole. While this observation is extremely simple, the reduction in the number of essential configurations from exponential in M to only linear provides dramatic computational benefits. This (M, M − 1) strategy for evaluating J is an unambiguous, spin-pure, wave function theory counterpart of the various projected broken-symmetry density functional theory schemes, and likewise gives explicit energies for each possible spin state that enable evaluation of properties. The approach is illustrated on five complexes with varying numbers of unpaired electrons, for which one-spin-flip calculations are used to compute J. Some implications for further development of spin-flip methods are discussed
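
    Under the Heisenberg assumption stated in the abstract, J follows from the Landé interval rule applied to the two computed states. With the common −2J convention (sign and factor conventions for J vary between papers, so this is a reference form rather than the authors' exact expression):

```latex
\hat{H} = -2J\, \hat{\mathbf{S}}_A \cdot \hat{\mathbf{S}}_B ,
\qquad
J = \frac{E_{S=M-1} - E_{S=M}}{2M}
```

    so a single (M, M − 1) spin-flip energy gap fixes J, and the remaining spin-state energies follow from the Heisenberg ladder.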

  13. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (T_p: I → I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
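
    For a definite (negation-free) ground program, the single step operator can be written down directly. The sketch below (plain Python, illustrative only; the paper treats this via neural networks rather than direct iteration) applies T_p once and iterates it to its least fixed point, the steady state the recurrent network is trained to reach:

```python
def tp(program, interp):
    # single step operator T_p: derive the head of every clause
    # whose body atoms are all true in the current interpretation
    return {head for head, body in program if body <= interp}

def fixed_point(program):
    # iterate T_p from the empty interpretation up to its least fixed point
    interp = set()
    while True:
        nxt = tp(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

# clauses: p.   q :- p.   r :- p, q.   (each clause is (head, body-atom set))
program = [("p", frozenset()), ("q", frozenset({"p"})), ("r", frozenset({"p", "q"}))]
assert fixed_point(program) == {"p", "q", "r"}
```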

  14. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (T_p: I → I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  15. Abdominal fat-evaluation by use of single scan computed tomography

    International Nuclear Information System (INIS)

    Jacobi, V.; Steinkamp, M.; Kirchner, J.; Fischer, H.; Diedrich, C.F.; Kollath, J.

    1997-01-01

    Purpose: Several studies have emphasised the importance of the relationship between intraabdominal and total body adipose tissue as a risk factor for the development of metabolic or cardiovascular diseases. Therefore, the aim of the present study was to examine whether single scan computed tomography can determine the whole intraabdominal fat volume with high accuracy and reproducibility. Materials and methods: Regions of interest (ROIs) were drawn manually for measuring intraabdominal fat in 51 unremarkable abdominal CT examinations. Results: The sexual differentiation of adipose tissue already described in many studies was confirmed: in females fat predominates in the lower half of the body (gynoid obesity), in men in the upper half (android obesity). Significant correlations for measuring the whole intraabdominal fat volume were found at the L1 level in women (r=0.992) and at the L2 level in men (r=0.992). Measurement of a single scan thus enables assessment of the whole intraabdominal fat volume using a special formula. Conclusion: The determination of intraabdominal fat by single scan computed tomography is a procedure with high accuracy and reproducibility. (orig.)

  16. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (T_p: I → I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks

  17. Thermal Dissipation Efficiency in a Micro-Processor Using Carbon Nanotubes Based Composite

    Science.gov (United States)

    Thang, Bui Hung; Van Quang, Cao; Nghia, Van Trong; Hong, Phan Ngoc; Van Chuc, Nguyen; Tam, Ngo Thi Thanh; Quang, Le Dinh; Khang, Dao Duc; Khoi, Phan Hong; Minh, Phan Ngoc

    2009-09-01

    Modern electronic and optoelectronic devices such as μ-processors, light emitting diodes and semiconductor lasers pose a challenge in thermal dissipation. Finding an effective way of dissipating heat has therefore become a very important issue. Carbon nanotubes (CNTs) are known to be one of the most valuable materials with high thermal conductivity (2000 W/m.K, compared to 419 W/m.K for Ag). This suggested an approach applying CNTs as an essential component of a thermal dissipation medium to improve the performance of computer processors and other high-power electronic devices. In this work, multi-walled carbon nanotube (MWCNT) based composites were utilized as the thermal dissipation medium in the microprocessor of a personal computer. MWCNTs of different concentrations were added to polyaniline, commercial silicon thermal paste and commercial silver thermal paste by mechanical methods. A personal computer with the configuration Intel Pentium IV 3.066 GHz, 512 MB of RAM and Windows XP Service Pack 2 was employed. The thermal dissipation efficiency of the system was evaluated by directly measuring the temperature of the μ-processor during operation of the computer at different CPU speeds. The measured results showed that the CNT-based composite could reduce the temperature of the μ-processor by more than 5°C, and the time for the μ-processor temperature to rise was three times longer than with commercial thermal paste.

  18. Using of opportunities of graphic processors for acceleration of scientific and technical calculations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Sereda, T.M.; Us, S.A.; Shestakov, M.V.

    2009-01-01

    The new opportunities offered by modern graphics processors (GPUs) for accelerating scientific and technical calculations, by dividing a computing task between the central processor and the GPU, are described. The use of the NVIDIA CUDA technology to harness the parallel computing capabilities of the GPU for some computationally intensive mathematical tasks is described. Examples comparing performance when these tasks are computed without a GPU and when using NVIDIA CUDA on a GeForce 8800 graphics processor are given

  19. Assessment of left ventricular function by electrocardiogram-gated myocardial single photon emission computed tomography using quantitative gated single photon emission computed tomography software

    International Nuclear Information System (INIS)

    Morita, Koichi; Adachi, Itaru; Konno, Masanori

    1999-01-01

    Electrocardiogram (ECG)-gated myocardial single photon emission computed tomography (SPECT) can assess left ventricular (LV) perfusion and function easily using quantitative gated SPECT (QGS) software. ECG-gated SPECT was performed in 44 patients with coronary artery disease under post-stress and resting conditions to assess the values of LV functional parameters, by comparison to LV ejection fraction derived from gated blood pool scan and myocardial characteristics. A good correlation was obtained between ejection fraction using QGS and that using cardiac blood pool scan (r=0.812). Some patients with myocardial ischemia had lower ejection fraction under post-stress compared to resting conditions, indicating post-stress LV dysfunction. LV wall motion and wall thickening were significantly impaired in ischemic and infarcted myocardium, and the degree of abnormality in the infarcted areas was greater than in the ischemia area. LV functional parameters derived using QGS were useful to assess post-stress LV dysfunction and myocardial viability. In conclusion, ECG-gated myocardial SPECT permits simultaneous quantitative assessment of myocardial perfusion and function. (author)

  20. Occult primary tumors of the head and neck: accuracy of thallium 201 single-photon emission computed tomography and computed tomography and/or magnetic resonance imaging

    NARCIS (Netherlands)

    van Veen, S. A.; Balm, A. J.; Valdés Olmos, R. A.; Hoefnagel, C. A.; Hilgers, F. J.; Tan, I. B.; Pameijer, F. A.

    2001-01-01

    To determine the accuracy of thallium 201 single-photon emission computed tomography (thallium SPECT) and computed tomography and/or magnetic resonance imaging (CT/MRI) in the detection of occult primary tumors of the head and neck. Study of diagnostic tests. National Cancer Institute, Amsterdam,

  1. Producing chopped firewood with firewood processors

    International Nuclear Information System (INIS)

    Kaerhae, K.; Jouhiaho, A.

    2009-01-01

    The TTS Institute's research and development project studied both the productivity of new chopped-firewood processors (cross-cutting and splitting machines) suitable for professional and independent small-scale production, and the costs of the chopped firewood produced. Seven chopped-firewood processors were tested in the research, six of which were sawing processors and one a shearing processor. The chopping work was carried out using wood feeding racks and a wood lifter, and also without any feeding appliances. Altogether 132.5 solid m³ of wood were chopped in the time studies. The firewood processor used had the most significant impact on chopping work productivity. In addition to the firewood processor, the stem mid-diameter, the length of the raw material and the length of the firewood were also found to affect productivity, as did the wood feeding systems. If a feeding rack and a hydraulic grapple loader are available for chopping firewood, then it is worth using the feeding rack. A wood lifter is only worth using with the largest stems (over 20 cm mid-diameter) if a feeding rack cannot be used. When producing chopped firewood from small-diameter wood, i.e. with a mid-diameter of less than 10 cm, the costs of chopping work were over 10 EUR per solid m³ with sawing firewood processors. The shearing firewood processor with a guillotine blade achieved a cost level of 5 EUR per solid m³ when the mid-diameter of the chopped stem was 10 cm. In addition to the raw material, cost-efficient chopping also requires several hundred annual operating hours with a firewood processor, which is difficult for individual firewood entrepreneurs to achieve. The operating hours of firewood processors can be increased to the required level through the joint use of the processors by a number of firewood entrepreneurs. (author)

  2. On the Organization of Parallel Operation of Some Algorithms for Finding the Shortest Path on a Graph on a Computer System with Multiple Instruction Stream and Single Data Stream

    Directory of Open Access Journals (Sweden)

    V. E. Podol'skii

    2015-01-01

    Full Text Available The paper considers the implementation of the Bellman-Ford and Lee algorithms for finding the shortest graph path on a computer system with multiple instruction streams and a single data stream (MISD). The MISD computer is a computer that executes commands of arithmetic-logical processing (on the CPU) and commands of structures processing (on the structures processor) in parallel on a single data stream. Transformation of sequential programs into MISD programs is a labor-intensive process because it requires the stream of arithmetic-logical processing to be manually separated from that of structures processing. Algorithms based on the processing of data structures (e.g., algorithms on graphs) show high performance on a MISD computer. The Bellman-Ford and Lee algorithms for finding the shortest path on a graph are representatives of these algorithms. They are applied in robotics for automatic planning of robot movement in situ. Modifications of the Bellman-Ford and Lee algorithms for finding the shortest graph path in coprocessor MISD mode, and the parallel MISD modifications of these algorithms, were first obtained in this article. Thus, this article continues a series of studies on the transformation of sequential algorithms into MISD ones (Dijkstra's and Ford-Fulkerson's algorithms) and is of a pronouncedly applied nature. The article also presents the results of analyzing the Bellman-Ford and Lee algorithms in MISD mode. The paper formulates the basic elements of a technique for parallelizing algorithms into an arithmetic-logical processing stream and a structures processing stream. Among the key areas for future research, the development of a mathematical approach to provide a subsequently formalized and automated process of parallelizing sequential algorithms between the CPU and the structures processor is highlighted. Among the mathematical models that can be used in future studies there are graph models of algorithms (e.g., the dependency graph of a program). Due to the high
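    For orientation, a minimal sequential sketch of the Bellman-Ford relaxation that the paper splits between the CPU and the structures processor; the graph encoding and values are illustrative, not taken from the article.

      # Minimal Bellman-Ford sketch; edges are (u, v, w) triples.
      def bellman_ford(num_vertices, edges, source):
          INF = float("inf")
          dist = [INF] * num_vertices
          dist[source] = 0
          # Relax every edge up to |V| - 1 times; the structure traversal is
          # the part a MISD machine would offload to the structures processor.
          for _ in range(num_vertices - 1):
              updated = False
              for u, v, w in edges:
                  if dist[u] != INF and dist[u] + w < dist[v]:
                      dist[v] = dist[u] + w
                      updated = True
              if not updated:
                  break
          return dist

      # Example: shortest paths from vertex 0 in a 4-vertex graph.
      print(bellman_ford(4, [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)], 0))  # [0, 3, 1, 4]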

  3. Processor farming method for multi-scale analysis of masonry structures

    Science.gov (United States)

    Krejčí, Tomáš; Koudelka, Tomáš

    2017-07-01

    This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of stone blocks is much larger than the size of mortar layers and a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level (the structure), and the slave processors deal with a homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.
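    A minimal sketch of the processor-farming (master/slave) pattern described above, using a Python process pool in place of the study's actual parallel environment; homogenize() and its values are placeholders for the real meso-scale RVE computation.

      # The master maps one meso-scale RVE task per macro-scale integration
      # point to the slave processes and gathers the effective properties.
      from multiprocessing import Pool

      def homogenize(task):
          # Placeholder for the RVE homogenization of one integration point.
          point_id, temperature, moisture = task
          return point_id, 1.5 + 0.01 * temperature, 0.2 + 0.05 * moisture

      if __name__ == "__main__":
          tasks = [(i, 20.0 + i, 0.5) for i in range(8)]   # illustrative states
          with Pool(processes=4) as farm:                  # the slave processors
              results = farm.map(homogenize, tasks)        # master gathers
          print(results)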

  4. A Modular Pipelined Processor for High Resolution Gamma-Ray Spectroscopy

    Science.gov (United States)

    Veiga, Alejandro; Grunfeld, Christian

    2016-02-01

    The design of a digital signal processor for gamma-ray applications is presented in which a single ADC input can simultaneously provide temporal and energy characterization of gamma radiation for a wide range of applications. Applying pipelining techniques, the processor is able to manage and synchronize very large volumes of streamed real-time data. Its modular user interface provides a flexible environment for experimental design. The processor can fit in a medium-sized FPGA device operating at ADC sampling frequency, providing an efficient solution for multi-channel applications. Two experiments are presented in order to characterize its temporal and energy resolution.

  5. The contribution of single photon emission computed tomography in the clinical assessment of Alzheimer type dementia

    International Nuclear Information System (INIS)

    Boudousq, V.; Collombier, L.; Kotzki, P.O.

    1999-01-01

    The value of brain single-photon emission computed tomography in supporting the clinical diagnosis of Alzheimer-type dementia is now established. Numerous studies have reported decreased perfusion in the association cortex of the parietal lobe and the posterior temporal regions. In patients with mild cognitive complaints, the presence of focal hypoperfusion may substantially increase the probability of the disease. In addition, emission tomography emerges as a helpful tool in situations in which there is diagnostic doubt. In this case, the presence of a temporo-parietal perfusion deficit associated with hippocampal atrophy on MRI or X-ray computed tomography contributes to diagnostic accuracy. Finally, some studies suggest that emission tomography may be useful for preclinical prediction of Alzheimer's disease and for predicting cognitive decline. (author)

  6. 123I-IMP single photon emission computed tomography (SPECT) study in childhood epilepsy

    International Nuclear Information System (INIS)

    Hara, Masafumi; Shimomura, Osamu; Kojima, Akihiro; Izunaga, Hiroshi; Tomiguchi, Seiji; Hirota, Yoshihisa; Taku, Keiichi; Miike, Teruhisa; Takahashi, Mutsumasa

    1990-01-01

    N-isopropyl-p-[¹²³I]-iodoamphetamine (IMP) single photon emission computed tomography (SPECT), X-ray computed tomography (X-CT) and magnetic resonance imaging (MRI) were performed in 18 children with idiopathic seizures. In these children SPECT identified abnormal lesions at the highest rate (50%) compared with X-CT (11%) and MRI (13%), but the SPECT findings correlated poorly with the foci on electroencephalography (EEG). Idiopathic epilepsy with abnormal uptake on SPECT was refractory to medical treatment and frequently associated with mental and/or developmental retardation. Perfusion defects identified on SPECT probably influenced brain development in these children. IMP SPECT is useful in the diagnosis and medical treatment of children with seizures. (author)

  7. Single-photon emission computed tomography in human immunodeficiency virus encephalopathy: A preliminary report

    International Nuclear Information System (INIS)

    Masdeu, J.C.; Yudd, A.; Van Heertum, R.L.; Grundman, M.; Hriso, E.; O'Connell, R.A.; Luck, D.; Camli, U.; King, L.N.

    1991-01-01

    Depression or psychosis in a previously asymptomatic individual infected with the human immunodeficiency virus (HIV) may be psychogenic, related to brain involvement by the HIV or both. Although prognosis and treatment differ depending on etiology, computed tomography (CT) and magnetic resonance imaging (MRI) are usually unrevealing in early HIV encephalopathy and therefore cannot differentiate it from psychogenic conditions. Thirty of 32 patients (94%) with HIV encephalopathy had single-photon emission computed tomography (SPECT) findings that differed from the findings in 15 patients with non-HIV psychoses and 6 controls. SPECT showed multifocal cortical and subcortical areas of hypoperfusion. In 4 cases, cognitive improvement after 6-8 weeks of zidovudine (AZT) therapy was reflected in amelioration of SPECT findings. CT remained unchanged. SPECT may be a useful technique for the evaluation of HIV encephalopathy

  8. Revealing −1 Programmed Ribosomal Frameshifting Mechanisms by Single-Molecule Techniques and Computational Methods

    Directory of Open Access Journals (Sweden)

    Kai-Chun Chang

    2012-01-01

    Full Text Available Programmed ribosomal frameshifting (PRF) serves as an intrinsic translational regulation mechanism employed by some viruses to control the ratio between structural and enzymatic proteins. Most viral mRNAs which use PRF adopt an H-type pseudoknot to stimulate −1 PRF. The relationship between the thermodynamic stability and the frameshifting efficiency of pseudoknots has not been fully understood. Recently, single-molecule force spectroscopy has revealed that the frequency of −1 PRF correlates with the unwinding forces required for disrupting pseudoknots, and that some of the unwinding work dissipates irreversibly due to the torsional restraint of pseudoknots. Complementary to single-molecule techniques, computational modeling provides insights into global motions of the ribosome, whose structural transitions during frameshifting have not yet been elucidated in atomic detail. Taken together, recent advances in biophysical tools may help to develop antiviral therapies that target the ubiquitous −1 PRF mechanism among viruses.

  9. Single- versus dual-energy quantitative computed tomography for spinal densitometry in patients with rheumatoid arthritis

    International Nuclear Information System (INIS)

    Laan, R.F.J.M.; Erning, L.J.Th.O. van; Lemmens, J.A.M.; Putte, L.B.A. van de; Ruijs, S.H.J.; Riel, P.L.C.M. van

    1992-01-01

    Lumbar bone mineral density was measured by both single- and dual-energy quantitative computed tomography in 109 patients with rheumatoid arthritis. The results were corrected for the age-related increase in vertebral fat content by converting them to percentages of expected densities, using sex and energy-level specific regression equations obtained in a normal reference population. The percentages of expected density are approximately 10% lower in the single- than in the dual-energy mode, both in the patients with and without prednisone therapy. This difference is statistically highly significant, and is positively correlated with the duration of the disease and with the degree of radiological joint destruction. The data suggest that the vertebral fat content may be increased in patients with rheumatoid arthritis, as a consequence of disease-dependent mechanisms. (Author)

  10. Studies of left ventricular volume estimation from single photon emission computed tomography

    International Nuclear Information System (INIS)

    Hiraki, Yoshio; Shimizu, Mitsuharu; Joja, Ikuo; Aono, Kaname; Yanagi, Hidekiyo; Indo, Haruaki; Seno, Yoshimasa; Teramoto, Shigeru; Nagaya, Isao.

    1988-01-01

    We studied the comparative accuracy of ⁹⁹ᵐTc cardiac blood pool single photon emission computed tomography (SPECT) for the measurement of left ventricular volume in 20 patients undergoing SPECT and single-plane contrast left ventriculography (LVG). Left ventricular volume was calculated from the total number of voxels in the left ventricle. End-diastolic left ventricular volume (EDV) and end-systolic left ventricular volume (ESV) calculated from SPECT were compared with those from LVG. SPECT volume values showed a high degree of correlation with those by LVG (r = 0.923 for EDV, r = 0.903 for ESV). We appreciated the usefulness and accuracy of SPECT in measuring left ventricular volume because of its three-dimensional information. (author)
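    As a worked illustration of the voxel-counting approach (all numbers are hypothetical, not from the study): the ventricular volume is the number of segmented LV voxels multiplied by the volume of a single voxel, and the ejection fraction follows from EDV and ESV.

      # Voxel-counting volume estimate; counts and voxel size are illustrative.
      voxel_volume_ml = (3.0 * 3.0 * 3.0) / 1000.0   # 3 mm voxels -> 0.027 ml
      edv_voxels, esv_voxels = 4800, 2100            # segmented LV voxel counts
      edv_ml = edv_voxels * voxel_volume_ml          # end-diastolic volume
      esv_ml = esv_voxels * voxel_volume_ml          # end-systolic volume
      ef = (edv_ml - esv_ml) / edv_ml                # ejection fraction
      print(f"EDV = {edv_ml:.1f} ml, ESV = {esv_ml:.1f} ml, EF = {ef:.0%}")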

  11. The efficacy of extracorporeal shock wave lithotripsy on single dense calcified gallstones according to computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Takao (Mie Univ., Tsu (Japan). School of Medicine); Shimono, Kazuko; Moriyama, Shigeru; Masuda, Touru; Ikeda, Tetsuya; Umegae, Satoru; Nagata, Norikazu

    1993-05-01

    The efficacy and complications of extracorporeal shock wave lithotripsy (ESWL) for single gallstones were compared between 15 patients with a CT-lucent stone and 18 patients with a dense calcified stone. In all of five patients with a stone smaller than 10 mm in diameter, complete or sufficient clearance was observed, regardless of calcification. However, in 28 patients with a stone larger than 11 mm in diameter, the rates of complete or sufficient clearance were lower in those with a dense calcified stone (64%) than in those with a computed tomography (CT) lucent stone (93%). There was no difference in the rate of complications between patients with a CT-lucent stone and those with a dense calcified stone. These results thus suggest that extracorporeal shock wave lithotripsy may be safely employed for patients with a single calcified gallstone. (author).

  12. Tunable microwave signal generation based on an Opto-DMD processor and a photonic crystal fiber

    International Nuclear Information System (INIS)

    Wang Tao; Sang Xin-Zhu; Yan Bin-Bin; Li Yan; Song Fei-Jun; Zhang Xia; Wang Kui-Ru; Yuan Jin-Hui; Yu Chong-Xiu; Ai Qi; Chen Xiao; Zhang Ying; Chen Gen-Xiang; Xiao Feng; Kamal Alameh

    2014-01-01

    Frequency-tunable microwave signal generation is proposed and experimentally demonstrated with a dual-wavelength single-longitudinal-mode (SLM) erbium-doped fiber ring laser based on a digital Opto-DMD processor and four-wave mixing (FWM) in a highly nonlinear photonic crystal fiber (PCF). The highly nonlinear PCF is employed to generate FWM and thereby obtain stable and uniform dual-wavelength oscillation. Two different short passive sub-ring cavities in the main ring cavity serve as mode filters to enforce SLM lasing. The two lasing wavelengths are electronically selected by loading different gratings on the Opto-DMD processor controlled by a computer. The wavelength spacing can be adjusted from 0.165 nm to 1.08 nm with a tuning accuracy of 0.055 nm. Two microwave signals at 17.23 GHz and 27.47 GHz are achieved, and the stability of the microwave signal is discussed. The system is also capable of generating a 137.36-GHz photonic millimeter-wave signal at room temperature
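    The link between wavelength spacing and generated frequency is the standard heterodyne beat relation; assuming operation in the usual erbium band near \lambda \approx 1550 nm (the operating band is not stated above), the beat frequency of the two lasing lines on a photodetector is

      f_{RF} = \frac{c\,\Delta\lambda}{\lambda^{2}}

    so each 0.1 nm of spacing corresponds to roughly 12.5 GHz, placing the 0.165-1.08 nm tuning range in the tens-of-gigahertz regime of the signals reported above.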

  13. Three Dimensional Simulation of Ion Thruster Plume-Spacecraft Interaction Based on a Graphic Processor Unit

    International Nuclear Information System (INIS)

    Ren Junxue; Xie Kan; Qiu Qian; Tang Haibin; Li Juan; Tian Huabing

    2013-01-01

    Based on the three-dimensional particle-in-cell (PIC) method and the Compute Unified Device Architecture (CUDA), a parallel particle simulation code running on a graphics processing unit (GPU) has been developed for the simulation of charge-exchange (CEX) xenon ions in the plume of an ion thruster. Using the proposed technique, the potential and CEX plasma distribution are calculated for the ion thruster plume surrounding the DS1 spacecraft at different thrust levels. The simulation results are in good agreement with measured CEX ion parameters reported in the literature, and the GPU results match the CPU results. Compared with a single Intel Core 2 E6300 CPU, the 16-processor NVIDIA GeForce 9400 GT GPU shows a speedup factor of 3.6 when the total macro-particle number is 1.1×10⁶. The simulation results also reveal how the backflow CEX plasma affects the spacecraft floating potential, indicating that the plume of the ion thruster is indeed able to alleviate the extreme negative floating potentials of spacecraft in geosynchronous orbit
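    As a sketch of the kernel such a code parallelizes over particles (a serial NumPy stand-in, not the CUDA implementation from the paper), the cloud-in-cell charge-deposition step scatters each macro-particle's charge to its neighboring grid nodes; on a GPU the two scatter lines become atomic adds.

      # 1-D cloud-in-cell charge deposition; all parameters are illustrative.
      import numpy as np

      def deposit_charge(x, q, num_cells, dx):
          rho = np.zeros(num_cells + 1)
          cell = np.floor(x / dx).astype(int)      # index of the left grid node
          frac = x / dx - cell                     # fractional position in cell
          np.add.at(rho, cell, q * (1.0 - frac))   # weight to left node
          np.add.at(rho, cell + 1, q * frac)       # weight to right node
          return rho / dx

      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, 1.0, size=100_000)   # particle positions in [0, 1)
      q = np.full(x.size, 1e-6)                 # equal macro-particle charges
      print(deposit_charge(x, q, num_cells=64, dx=1.0 / 64).sum())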

  14. Compilation Techniques Specific for a Hardware Cryptography-Embedded Multimedia Mobile Processor

    Directory of Open Access Journals (Sweden)

    Masa-aki FUKASE

    2007-12-01

    Full Text Available The development of single-chip VLSI processors is the key technology of ever-growing pervasive computing to answer overall demands for usability, mobility, speed, security, etc. We have so far developed a hardware cryptography-embedded multimedia mobile processor architecture, HCgorilla. Since HCgorilla integrates a wide range of techniques from architectures to applications and languages, a one-sided design approach is not always useful. HCgorilla needs a more complicated strategy, that is, hardware/software (H/S) codesign. Thus, we exploit the software support of HCgorilla, composed of a Java interface and parallelizing compilers. They are assumed to be installed in servers in order to reduce the load and increase the performance of HCgorilla-embedded clients. Since compilers are the essence of the software's responsibility, we focus in this article on our recent results concerning the design, specifications, and prototyping of parallelizing compilers for HCgorilla. The parallelizing compilers are composed of a multicore compiler and a LIW compiler. They are specified to extract parallelism from executable serial codes or the Java interface output, and to output codes executable in parallel by HCgorilla. The prototype compilers are written in Java. An evaluation using an arithmetic test program shows the reasonableness of the prototype compilers compared with hand compilation.

  15. Design and evaluation of an architecture for a digital signal processor for instrumentation applications

    Science.gov (United States)

    Fellman, Ronald D.; Kaneshiro, Ronald T.; Konstantinides, Konstantinos

    1990-03-01

    The authors present the design and evaluation of an architecture for a monolithic, programmable, floating-point digital signal processor (DSP) for instrumentation applications. An investigation of the most commonly used algorithms in instrumentation led to a design that satisfies the requirements for high computational and I/O (input/output) throughput. In the arithmetic unit, a 16 × 16-bit multiplier and a 32-bit accumulator provide the capability for single-cycle multiply/accumulate operations, and three format adjusters automatically adjust the data format for increased accuracy and dynamic range. An on-chip I/O unit is capable of handling data block transfers through a direct memory access port and real-time data streams through a pair of parallel I/O ports. I/O operations and program execution are performed in parallel. In addition, the processor includes two data memories with independent addressing units, a microsequencer with instruction RAM, and multiplexers for internal data redirection. The authors also present the structure and implementation of a design environment suitable for the algorithmic, behavioral, and timing simulation of a complete DSP system. Various benchmarking results are reported.
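    To make the multiply/accumulate datapath concrete (a plain software analogue, not the chip's instruction set): the FIR filter is the canonical instrumentation kernel, mapping one filter tap to one single-cycle MAC operation.

      # FIR filter as a chain of multiply/accumulate steps; taps are illustrative.
      def fir(samples, taps):
          out = []
          for n in range(len(samples)):
              acc = 0.0                          # the accumulator's role
              for k, h in enumerate(taps):
                  if n - k >= 0:
                      acc += h * samples[n - k]  # one MAC per tap
              out.append(acc)
          return out

      print(fir([1.0, 2.0, 3.0, 4.0], [0.5, 0.25, 0.25]))  # [0.5, 1.25, 2.25, 3.25]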

  16. Fundamental physics issues of multilevel logic in developing a parallel processor.

    Science.gov (United States)

    Bandyopadhyay, Anirban; Miki, Kazushi

    2007-06-01

    In the last century, physical On and Off switches were equated with the two decisions 0 and 1, so that all information could be expressed in binary digits and physically realized by switches connected in a circuit. Apart from increasing memory density significantly, having more possible choices in a particular space makes pattern-logic a reality, and manipulation of patterns would allow logic to be controlled, generating a new kind of processor. Von Neumann's computer is based on sequential logic, processing bits one by one. But since pattern-logic is generated on a surface, viewing the whole pattern at a time is truly parallel processing. Following von Neumann's and Shannon's fundamental thermodynamical approaches, we have built a compatible model based on a series of single-molecule-based multibit logic systems of 4-12 bits in a UHV-STM. Multilevel communication and pattern formation on their monolayer has been experimentally verified. Furthermore, the developed intelligent monolayer is trained by an artificial neural network. Therefore, the fundamental weak interactions for building a truly parallel processor are explored here physically and theoretically.

  17. Computer-aided, single-specimen controlled bending test for fracture-kinetics measurement in ceramics

    International Nuclear Information System (INIS)

    Borovik, V.G.; Chushko, V.M.; Kovalev, S.P.

    1995-01-01

    Fracture testing of ceramics by using controlled crack growth is proposed to allow study of crack-kinetics behavior under a given loading history. A computer-aided, real-time data acquisition system improves the quality of crack-growth parameters obtained in a simple, single-specimen bend test. Several ceramic materials were tested in the present study: aluminum nitride as a linear-elastic material; and alumina and yttria-stabilized zirconia, both representative of ceramics with microstructure-dependent nonlinear fracture properties. Ambiguities in the crack-growth diagrams are discussed to show the importance of accounting for crack-growth history in correctly describing nonequilibrium fracture behavior

  18. A Computer Program for Flow-Log Analysis of Single Holes (FLASH)

    Science.gov (United States)

    Day-Lewis, F. D.; Johnson, C.D.; Paillet, Frederick L.; Halford, K.J.

    2011-01-01

    A new computer program, FLASH (Flow-Log Analysis of Single Holes), is presented for the analysis of borehole vertical flow logs. The code is based on an analytical solution for steady-state multilayer radial flow to a borehole. The code includes options for (1) discrete fractures and (2) multilayer aquifers. Given vertical flow profiles collected under both ambient and stressed (pumping or injection) conditions, the user can estimate fracture (or layer) transmissivities and far-field hydraulic heads. FLASH is coded in Microsoft Excel with Visual Basic for Applications routines. The code supports manual and automated model calibration. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
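    The steady-state relation underlying this style of flow-log analysis is the Thiem solution for radial flow between layer i and the borehole (standard well hydraulics, not quoted from the FLASH documentation):

      Q_i = \frac{2\pi T_i\,(h_i - h_w)}{\ln(r_0 / r_w)}

    where T_i is the layer transmissivity, h_i its far-field head, h_w the head in the borehole, and r_0 and r_w the radius of influence and the borehole radius. Measuring the layer inflows Q_i under ambient and stressed borehole heads gives two equations per layer, from which T_i and h_i can both be estimated.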

  19. Left ventricular volume determination from single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Bunker, S.R.; Hartshorne, M.F.; Schmidt, W.P.; Cawthon, M.A.; Karl, R.D. Jr.; Bauman, J.M.; Howard, W.H. III; Rubal, B.J.

    1985-01-01

    To compare the accuracy of single-photon emission computed tomography (SPECT) with that of contrast cineangiography in measuring left ventricular end-diastolic volume, 25 consecutive patients undergoing catheterization for coronary artery or valvular heart disease were first evaluated scintigraphically. SPECT volume values showed a high degree of correlation with those determined by angiography, with a standard error of the estimate of 23 ml. SPECT offers a highly accurate and essentially noninvasive method for measuring chamber volumes that is independent of geometric assumptions about ventricular configuration and chest wall attenuation and does not require blood sample counting

  20. Left ventricular volume determination from single-photon emission computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Bunker, S.R.; Hartshorne, M.F.; Schmidt, W.P.; Cawthon, M.A.; Karl, R.D. Jr.; Bauman, J.M.; Howard, W.H. III; Rubal, B.J.

    1985-02-01

    To compare the accuracy of single-photon emission computed tomography (SPECT) with that of contrast cineangiography in measuring left ventricular end-diastolic volume, 25 consecutive patients undergoing catheterization for coronary artery or valvular heart disease were first evaluated scintigraphically. SPECT volume values showed a high degree of correlation with those determined by angiography, with a standard error of the estimate of 23 ml. SPECT offers a highly accurate and essentially noninvasive method for measuring chamber volumes that is independent of geometric assumptions about ventricular configuration and chest wall attenuation and does not require blood sample counting.

  1. Painful spondylolysis or spondylolisthesis studied by radiography and single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Collier, B.D.; Johnson, R.P.; Carrera, G.F.

    1985-01-01

    Planar bone scintigraphy (PBS) and single-photon emission computed tomography (SPECT) were compared in 19 adults with radiographic evidence of spondylolysis and/or spondylolisthesis. SPECT was more sensitive than PBS when used to identify symptomatic patients and sites of painful defects in the pars interarticularis. In addition, SPECT allowed more accurate localization than PBS. In 6 patients, spondylolysis or spondylolisthesis was unrelated to low back pain, and SPECT images of the posterior neural arch were normal. The authors conclude that when spondylolysis or spondylolisthesis is the cause of low back pain, pars defects are frequently heralded by increased scintigraphic activity which is best detected and localized by SPECT

  2. Quantum computers based on electron spins controlled by ultrafast off-resonant single optical pulses.

    Science.gov (United States)

    Clark, Susan M; Fu, Kai-Mei C; Ladd, Thaddeus D; Yamamoto, Yoshihisa

    2007-07-27

    We describe a fast quantum computer based on optically controlled electron spins in charged quantum dots that are coupled to microcavities. This scheme uses broadband optical pulses to rotate electron spins and provide the clock signal to the system. Nonlocal two-qubit gates are performed by phase shifts induced by electron spins on laser pulses propagating along a shared waveguide. Numerical simulations of this scheme demonstrate high-fidelity single-qubit and two-qubit gates with operation times comparable to the inverse Zeeman frequency.

  3. Binary Factorization in Hopfield-Like Neural Networks: Single-Step Approximation and Computer Simulations

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Sirota, A.M.; Húsek, Dušan; Muraviev, I. P.

    2004-01-01

    Roč. 14, č. 2 (2004), s. 139-152 ISSN 1210-0552 R&D Projects: GA ČR GA201/01/1192 Grant - others:BARRANDE(EU) 99010-2/99053; Intellectual computer Systems(EU) Grant 2.45 Institutional research plan: CEZ:AV0Z1030915 Keywords : nonlinear binary factor analysis * feature extraction * recurrent neural network * Single-Step approximation * neurodynamics simulation * attraction basins * Hebbian learning * unsupervised learning * neuroscience * brain function modeling Subject RIV: BA - General Mathematics

  4. Painful spondylolysis or spondylolisthesis studied by radiography and single-photon emission computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Collier, B.D.; Johnson, R.P.; Carrera, G.F.; Meyer, G.A.; Schwab, J.P.; Flatley, T.J.; Isitman, A.T.; Hellman, R.S.; Zielonka, J.S.; Knobel, J.

    1985-01-01

    Planar bone scintigraphy (PBS) and single-photon emission computed tomography (SPECT) were compared in 19 adults with radiographic evidence of spondylolysis and/or spondylolisthesis. SPECT was more sensitive than PBS when used to identify symptomatic patients and sites of painful defects in the pars interarticularis. In addition, SPECT allowed more accurate localization than PBS. In 6 patients, spondylolysis or spondylolisthesis was unrelated to low back pain, and SPECT images of the posterior neural arch were normal. The authors conclude that when spondylolysis or spondylolisthesis is the cause of low back pain, pars defects are frequently heralded by increased scintigraphic activity which is best detected and localized by SPECT.

  5. Bronchobiliary Fistula Localized by Cholescintigraphy with Single-Photon Emission Computed Tomography

    International Nuclear Information System (INIS)

    Artunduaga, Maddy; Patel, Niraj R.; Wendt, Julie A.; Guy, Elizabeth S.; Nachiappan, Arun C.

    2015-01-01

    Biliptysis is an important clinical feature to recognize as it is associated with bronchobiliary fistula, a rare entity. Bronchobiliary fistulas have been diagnosed with planar cholescintigraphy. However, cholescintigraphy with single-photon emission computed tomography (SPECT) can better spatially localize a bronchobiliary fistula as compared to planar cholescintigraphy alone, and is useful for preoperative planning if surgical treatment is required. Here, we present the case of a 23-year-old male who developed a bronchobiliary fistula in the setting of posttraumatic and postsurgical infection, which was diagnosed and localized by cholescintigraphy with SPECT

  6. An analysis of regional cerebral blood flow in impulsive murderers using single photon emission computed tomography.

    Science.gov (United States)

    Amen, Daniel G; Hanks, Chris; Prunella, Jill R; Green, Aisa

    2007-01-01

    The authors explored differences in regional cerebral blood flow in 11 impulsive murderers and 11 healthy comparison subjects using single photon emission computed tomography. The authors assessed subjects at rest and during a computerized go/no-go concentration task. Using statistical parametric mapping software, the authors performed voxel-by-voxel t tests to assess significant differences, making family-wide error corrections for multiple comparisons. Murderers were found to have significantly lower relative rCBF during concentration, particularly in areas associated with concentration and impulse control. These results indicate that nonemotionally laden stimuli may result in frontotemporal dysregulation in people predisposed to impulsive violence.

  7. IBIC characterisation of novel detectors for single atom doping of quantum computer devices

    International Nuclear Information System (INIS)

    Yang Changyi; Jamieson, David N.; Pakes, Chris I.; George, Damien P.; Hearne, Sean M.; Dzurak, Andrew S.; Gauja, Eric; Stanley, F.; Clark, R.G.

    2003-01-01

    Single ion implantation with online detection is highly desirable for an emerging application in which single ³¹P ions need to be inserted into prefabricated silicon cells to construct solid-state quantum bits (qubits). In order to fabricate qubit arrays, we have developed novel detectors that employ detector electrodes adjacent to the prefabricated cells and can detect single keV ion strikes appropriate for the fabrication of shallow phosphorus arrays. The method utilises a high-purity silicon substrate with very high resistivity, a thin SiO₂ surface layer, nanometer masks for the lateral positioning of single phosphorus implants, biased electrodes applied to the surface of the silicon, and sensitive electronics that can detect the charge transient from single keV ion strikes. A TCAD (Technology Computer Aided Design) software package was applied in the optimisation of the device design and the simulation of the detector performance. Here we show the characterisation of these detectors using ion beam induced charge (IBIC) with a focused 2 MeV He ion beam in a nuclear microprobe. The IBIC imaging method in a nuclear microprobe allowed us to measure the dead-layer thickness of the detector structure (required to be very thin for successful detection of keV ions) and the spatial distribution of the charge collection efficiency around the entire region of the detector. We show that our detectors have near 100% charge collection efficiency for MeV ions, an extremely thin dead layer (about 7 nm) and a wide active region extending laterally from the electrodes (10-20 μm) where qubit arrays can be constructed. We demonstrate that the device can be successfully applied in the detection of the keV ionisation energy from single events of keV X-rays and keV ³¹P ions

  8. IBIC characterisation of novel detectors for single atom doping of quantum computer devices

    Energy Technology Data Exchange (ETDEWEB)

    Yang Changyi (E-mail: cjy@physics.unimelb.edu.au; c.yang@physics.unimelb.edu.au); Jamieson, David N.; Pakes, Chris I.; George, Damien P.; Hearne, Sean M.; Dzurak, Andrew S.; Gauja, Eric; Stanley, F.; Clark, R.G

    2003-09-01

    Single ion implantation with online detection is highly desirable for an emerging application in which single ³¹P ions need to be inserted into prefabricated silicon cells to construct solid-state quantum bits (qubits). In order to fabricate qubit arrays, we have developed novel detectors that employ detector electrodes adjacent to the prefabricated cells and can detect single keV ion strikes appropriate for the fabrication of shallow phosphorus arrays. The method utilises a high-purity silicon substrate with very high resistivity, a thin SiO₂ surface layer, nanometer masks for the lateral positioning of single phosphorus implants, biased electrodes applied to the surface of the silicon, and sensitive electronics that can detect the charge transient from single keV ion strikes. A TCAD (Technology Computer Aided Design) software package was applied in the optimisation of the device design and the simulation of the detector performance. Here we show the characterisation of these detectors using ion beam induced charge (IBIC) with a focused 2 MeV He ion beam in a nuclear microprobe. The IBIC imaging method in a nuclear microprobe allowed us to measure the dead-layer thickness of the detector structure (required to be very thin for successful detection of keV ions) and the spatial distribution of the charge collection efficiency around the entire region of the detector. We show that our detectors have near 100% charge collection efficiency for MeV ions, an extremely thin dead layer (about 7 nm) and a wide active region extending laterally from the electrodes (10-20 μm) where qubit arrays can be constructed. We demonstrate that the device can be successfully applied in the detection of the keV ionisation energy from single events of keV X-rays and keV ³¹P ions.

  9. Verification of a hybrid adjoint methodology in Titan for single photon emission computed tomography - 316

    International Nuclear Information System (INIS)

    Royston, K.; Haghighat, A.; Yi, C.

    2010-01-01

    The hybrid deterministic transport code TITAN is being applied to a Single Photon Emission Computed Tomography (SPECT) simulation of a myocardial perfusion study. The TITAN code's hybrid methodology allows the use of a discrete ordinates solver in the phantom region and a characteristics method solver in the collimator region. Currently we seek to validate the adjoint methodology in TITAN for this application using a SPECT model that has been created in the MCNP5 Monte Carlo code. The TITAN methodology was examined based on the response of a single-voxel detector placed in front of the heart with and without collimation. For the case without collimation, the TITAN response for a single voxel-sized detector had a -9.96% difference relative to the MCNP5 response. To simulate collimation, the adjoint source was specified in directions located within the collimator acceptance angle. For a single collimator hole with a diameter matching the voxel dimension, a difference of -0.22% was observed. Comparisons to groupings of smaller collimator holes of two different sizes resulted in relative differences of 0.60% and 0.12%. Increasing the number of adjoint source directions within an acceptance angle showed no significant change in accuracy. Our results indicate that the hybrid adjoint methodology of TITAN yields accurate solutions more than a factor of two faster than MCNP5. (authors)
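    For context, the adjoint methodology being verified rests on the standard reciprocity relation: with the detector response function used as the source of an adjoint transport solve, the detector reading for any photon source q (here, the radiotracer distribution) is

      R = \int \psi^{\dagger}(\vec{r}, E, \hat{\Omega})\, q(\vec{r}, E, \hat{\Omega})\, dV\, dE\, d\Omega

    so a single adjoint solution yields the response for every source distribution, which is what makes the approach attractive for SPECT projection modeling.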

  10. Safe and Efficient Support for Embedded Multi-Processors in Ada

    Science.gov (United States)

    Ruiz, Jose F.

    2010-08-01

    New software demands increasing processing power, and multi-processor platforms are spreading as the answer to achieve the required performance. Embedded real-time systems are also subject to this trend, but in the case of real-time mission-critical systems, the properties of reliability, predictability and analyzability are also paramount. The Ada 2005 language defined a subset of its tasking model, the Ravenscar profile, that provides the basis for the implementation of deterministic and time analyzable applications on top of a streamlined run-time system. This Ravenscar tasking profile, originally designed for single processors, has proven remarkably useful for modelling verifiable real-time single-processor systems. This paper proposes a simple extension to the Ravenscar profile to support multi-processor systems using a fully partitioned approach. The implementation of this scheme is simple, and it can be used to develop applications amenable to schedulability analysis.

  11. Reward-based learning under hardware constraints-using a RISC processor embedded in a neuromorphic substrate.

    Science.gov (United States)

    Friedmann, Simon; Frémaux, Nicolas; Schemmel, Johannes; Gerstner, Wulfram; Meier, Karlheinz

    2013-01-01

    In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as a baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and the embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, that a simple interface between analog synapses and processor is sufficient for learning, and that performance is insensitive to mismatch. Further, we consider the communication latency between the wafer and the conventional control computer system that simulates the environment. This latency increases the delay with which the reward is sent to the embedded processor. Because of the time-continuous operation of the analog synapses, this delay can cause the updates to deviate from the non-delayed case. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.
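    A minimal sketch of a reward-modulated STDP update of the kind emulated above (constants and spike pairings are invented for illustration; this is not the BrainScaleS implementation): each synapse accumulates an STDP eligibility trace, and the delayed, globally broadcast reward scales that trace into a weight change.

      # Reward-modulated STDP with an eligibility trace; values illustrative.
      import math

      TAU_E = 50.0   # eligibility-trace time constant (ms)
      ETA = 0.01     # learning rate

      def stdp_kernel(dt_ms):
          # Pre-before-post (dt >= 0) potentiates; post-before-pre depresses.
          return math.exp(-dt_ms / 20.0) if dt_ms >= 0 else -math.exp(dt_ms / 20.0)

      w, trace, last_t = 0.5, 0.0, 0.0
      pairings = [(10.0, 2.0), (40.0, -3.0), (90.0, 5.0)]  # (time, t_post - t_pre)
      for t, dt in pairings:
          trace *= math.exp(-(t - last_t) / TAU_E)  # decay since last pairing
          trace += stdp_kernel(dt)                  # accumulate this pairing
          last_t = t

      reward = 1.0                # delayed global reward signal
      w += ETA * reward * trace   # modulated weight update
      print(round(w, 4))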

  12. A CNN-Specific Integrated Processor

    Directory of Open Access Journals (Sweden)

    Suleyman Malki

    2009-01-01

    Full Text Available Integrated Processors (IP) are algorithm-specific cores that, either by programming or by configuration, can be re-used within many microelectronic systems. This paper looks at realizing Cellular Neural Networks (CNN) as IP. First, current digital implementations are reviewed, and the memory-processor bandwidth issues are analyzed. Then a generic view is taken on the structure of the network, and a new intra-communication protocol based on rotating wheels is proposed. It is shown that this provides guaranteed high performance with a minimal network interface. The resulting node is small and supports multi-level CNN designs, giving the system a 30-fold increase in capacity compared to classical designs. As it facilitates multiple operations on a single image, and single operations on multiple images, with minimal access to the external image memory, balancing the internal and external data transfer requirements optimizes the system operation. In conventional digital CNN designs, the treatment of boundary nodes requires additional logic to handle the CNN value propagation scheme. In the new architecture, only a slight modification of the existing cells is necessary to model the boundary effect. A typical prototype for visual pattern recognition will house 4096 CNN cells with a 2% overhead for making it an IP.

  13. Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors

    Energy Technology Data Exchange (ETDEWEB)

    Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep

    2014-12-01

    The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.

  14. Allocating application to group of consecutive processors in fault-tolerant deadlock-free routing path defined by routers obeying same rules for path selection

    Science.gov (United States)

    Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL

    2009-07-21

    In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
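    A toy sketch of the allocation idea (the ordering and job sizes are invented for illustration): the routing rules induce a linear ordering of the processors that is treated as a loop, and each job is packed onto a run of consecutive free processors along that loop.

      # First-fit packing of jobs onto consecutive processors on a loop.
      def allocate(num_procs, jobs):
          free = [True] * num_procs
          placements = {}
          for job_id, size in jobs:
              for start in range(num_procs):
                  run = [(start + k) % num_procs for k in range(size)]
                  if all(free[p] for p in run):
                      for p in run:
                          free[p] = False
                      placements[job_id] = run
                      break
          return placements

      print(allocate(8, [("A", 3), ("B", 2), ("C", 3)]))
      # {'A': [0, 1, 2], 'B': [3, 4], 'C': [5, 6, 7]}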

  15. Utility of single photon emission computed tomography/computed tomography imaging in evaluation of chronic low back pain

    International Nuclear Information System (INIS)

    Harisankar, Chidambaram Natrajan Balasubramanian; Mittal, Bhagwant Rai; Bhattacharya, Anish; Singh, Paramjeet; Sen, Ramesh

    2012-01-01

    Abnormal morphologic findings on imaging were once thought to explain the etiology of low back pain (LBP). However, it is now known that a variety of morphologic abnormalities are noted even in asymptomatic individuals. Single photon emission computed tomography/computed tomography (SPECT/CT) could be used to differentiate incidental findings from clinically significant findings. This study was performed to define the SPECT/CT patterns in patients with LBP and to correlate these with clinical and magnetic resonance imaging (MRI) findings. Thirty adult patients with LBP of duration 3 months or more were prospectively evaluated in this study. Patients with known or suspected malignancy, trauma or infectious processes were excluded. A detailed history of sensory and motor symptoms and a neurologic examination were performed. All the patients underwent MRI and bone scintigraphy with hybrid SPECT/CT of the lumbo-sacral spine within 1 month of each other. The patients were classified into those with and without neurologic symptoms and activity limitation. The findings of clinical examination and imaging were compared, and MRI and SPECT/CT findings were also compared. Thirty patients (18 men and 12 women; mean age 38 years; range 17-64 years) were eligible for the study. Clinically, 14 of 30 (46%) had neurologic signs and/or symptoms. Six of the 30 patients (20%) had a positive straight leg raising test (SLRT). Twenty-two of the 30 patients (73%) had a SPECT abnormality. The most frequent SPECT/CT abnormality was tracer uptake in the anterior part of the vertebral body with osteophytes/sclerotic changes. Significant positive agreement was noted between this finding and MRI evidence of degenerative disc disease. Only 13% of patients had more than one abnormality on SPECT. All 30 patients had MRI abnormalities. The most frequent abnormality was degenerative disc disease and facet joint arthropathy. MRI showed single intervertebral disc abnormality in 36% of the patients and more than one

  16. Utility of Quantitative Parameters from Single-Photon Emission Computed Tomography/Computed Tomography in Patients with Destructive Thyroiditis

    Science.gov (United States)

    Kim, Ji-Young; Kim, Ji Hyun; Moon, Jae Hoon; Kim, Kyoung Min; Oh, Tae Jung; Lee, Dong-Hwa; So, Young

    2018-01-01

    Objective: Quantitative parameters from Tc-99m pertechnetate single-photon emission computed tomography/computed tomography (SPECT/CT) are emerging as novel diagnostic markers for functional thyroid diseases. We intended to assess the utility of SPECT/CT parameters in patients with destructive thyroiditis. Materials and Methods: Thirty-five destructive thyroiditis patients (7 males and 28 females; mean age, 47.3 ± 13.0 years) and 20 euthyroid patients (6 males and 14 females; mean age, 45.0 ± 14.8 years) who underwent Tc-99m pertechnetate quantitative SPECT/CT were retrospectively enrolled. Quantitative parameters from the SPECT/CT (%uptake, standardized uptake value [SUV], thyroid volume, and functional thyroid mass [SUVmean × thyroid volume]) and thyroid hormone levels were investigated to assess correlations and predict the prognosis for destructive thyroiditis. The occurrence of hypothyroidism was the outcome for prognosis. Results: All the SPECT/CT quantitative parameters were significantly lower in the 35 destructive thyroiditis patients compared to the 20 euthyroid patients scanned with the same SPECT/CT scanner and protocol. Thyroid-stimulating hormone (TSH) significantly correlated with %uptake (p = 0.004), SUVmean, and functional thyroid mass. Of the 35 destructive thyroiditis patients, 16 progressed to hypothyroidism. On univariate and multivariate analyses, only T3 levels were associated with the later occurrence of hypothyroidism (p = 0.002, exp(β) = 1.022, 95% confidence interval: 1.008-1.035). Conclusion: Novel quantitative SPECT/CT parameters could discriminate patients with destructive thyroiditis from euthyroid patients, suggesting the robustness of the quantitative SPECT/CT approach. However, disease progression of destructive thyroiditis could not be predicted using the parameters, as these only correlated with TSH, but not with T3, the sole predictor of the later occurrence of hypothyroidism. PMID:29713225

  17. The communication processor of TUMULT-64

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Jansen, P.G.

    1988-01-01

    Tumult (Twente University MULTi-processor system) is a modular extendible multi-processor system designed and implemented at the Twente University of Technology in co-operation with Oce Nederland B.V. and the Dr. Neher Laboratories (Dutch PTT). Characteristics of the hardware are: MIMD type,

  18. Development of a small-scale computer cluster

    Science.gov (United States)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has necessitated high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits on its performance, a cluster of computers can, with the proper software, multiply the performance of a single computer. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost-effective. To explore these possibilities, an experiment was performed to develop a computing cluster from desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and remaining cost-effective.

  19. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    OpenAIRE

    Abdul Kareem PARCHUR; Ram Asaray SINGH

    2012-01-01

    High performance is a critical requirement for all microprocessor manufacturers. The present paper describes a comparison of the performance of two main Intel Xeon processor series (Type A: Intel Xeon X5260, X5460, E5450 and L5320; Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is based on a new family of processors from Intel starting with the Pentium 4 processor. These processors can provide a performance boost for many ke...

  20. Development of a highly reliable CRT processor

    International Nuclear Information System (INIS)

    Shimizu, Tomoya; Saiki, Akira; Hirai, Kenji; Jota, Masayoshi; Fujii, Mikiya

    1996-01-01

    Although CRT processors have been employed in the main control board to reduce the operator's workload during monitoring, the control systems are still operated by hardware switches. For further advancement, direct controller operation through a display device is expected. A CRT processor providing direct controller operation must be as reliable as the hardware switches. The authors are developing a new type of highly reliable CRT processor that enables direct controller operation. In this paper, we discuss the design principles behind a highly reliable CRT processor. The principles are defined by studies of software reliability and of the functional reliability of the monitoring and operation systems. The functional configuration of an advanced CRT processor is also addressed. (author)

  1. Online track processor for the CDF upgrade

    International Nuclear Information System (INIS)

    Thomson, E. J.

    2002-01-01

    A trigger track processor, called the eXtremely Fast Tracker (XFT), has been designed for the CDF upgrade. This processor identifies high transverse momentum (> 1.5 GeV/c) charged particles in the new central outer tracking chamber for CDF II. The XFT design is highly parallel to handle the input rate of 183 Gbits/s and output rate of 44 Gbits/s. The processor is pipelined and reports the result for a new event every 132 ns. The processor uses three stages: hit classification, segment finding, and segment linking. The pattern recognition algorithms for the three stages are implemented in programmable logic devices (PLDs) which allow in-situ modification of the algorithm at any time. The PLDs reside on three different types of modules. The complete system has been installed and commissioned at CDF II. An overview of the track processor and performance in CDF Run II are presented

  2. Hemodynamics of hepatocellular carcinoma with single-level dynamic computed tomography during hepatic arteriography

    International Nuclear Information System (INIS)

    Tanihata, Hirohiko

    2002-01-01

    The purpose of this study was to verify the hemodynamics of hepatocellular carcinoma (HCC) and to explore the draining pathways using single-level dynamic computed tomography during hepatic arteriography (single-level dynamic CTHA). One hundred one patients with 131 HCC nodules underwent single-level dynamic CTHA. Forty-seven nodules were diagnosed from histological specimens and the other eighty-four nodules from clinical findings of elevated AFP and/or PIVKA II and a hypervascular tumor on angiography. Single-level dynamic CTHA was performed with a catheter inserted into the proper hepatic artery or a more peripheral hepatic artery, with a slice thickness of 3 mm at a fixed level. Images were acquired continuously at one per second for 40 seconds during injection of contrast medium at 2 ml/sec for 10 seconds. The images were divided into three phases: an early phase (1 to 10 seconds), a middle phase (11 to 20 seconds) and a late phase (21 to 40 seconds). After analysis of the vascular pattern in each phase, the hemodynamics of HCC were classified into three patterns: a hypovascular pattern in 24 nodules with an average size of 13.4±4.2 mm, an intermediate pattern in 21 nodules with an average size of 20.8±7.8 mm, and a hypervascular pattern in 86 nodules with an average size of 31.6±16.3 mm. There were significant correlations between tumor size and vascular pattern. In the hypovascular and intermediate groups, the draining pathways were sinusoids. Of the 86 nodules with a hypervascular pattern, blood flow drained into the portal vein (including a bright branch structure) in 20 nodules, into the portal vein and hepatic vein in 2 nodules, into the portal vein and an extrahepatic vein in 1 nodule, into the hepatic vein in 11 nodules, into extrahepatic veins in 4 nodules and into sinusoids in 48 nodules. In conclusion, from the viewpoint of hemodynamics using single-level dynamic CTHA, I proposed the new

  3. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
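    In the spirit of the model described above (the cost coefficients below are invented for illustration, not the paper's calibrated values), execution time for a pr × pc processor grid can be written as a sum of computation, message-startup, and transfer terms, and the configuration chosen by minimizing the prediction.

      # Toy performance model: choose the grid shape minimizing predicted time.
      def predicted_time(n, pr, pc, t_flop=1e-8, t_start=1e-4, t_word=1e-6):
          p = pr * pc
          compute = (2.0 * n**3 / p) * t_flop            # flops per processor
          startup = (pr + pc) * t_start                  # row/column message startups
          transfer = (n * n / pc + n * n / pr) * t_word  # words moved per sweep
          return compute + startup + transfer

      def best_grid(n, max_procs):
          configs = [(pr, pc) for pr in range(1, max_procs + 1)
                     for pc in range(1, max_procs // pr + 1)]
          return min(configs, key=lambda g: predicted_time(n, *g))

      print(best_grid(200, 64))  # (8, 8) with these illustrative coefficients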

  4. DE-BLURRING SINGLE PHOTON EMISSION COMPUTED TOMOGRAPHY IMAGES USING WAVELET DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Neethu M. Sasi

    2016-02-01

    Full Text Available Single photon emission computed tomography imaging is a popular nuclear medicine imaging technique which generates images by detecting radiation emitted by radioactive isotopes injected into the human body. Scattering of the emitted radiation introduces blur in this type of image. This paper proposes an image processing technique to enhance cardiac single photon emission computed tomography images by reducing the blur in the image. The algorithm works in two main stages. In the first stage a maximum-likelihood estimate of the point spread function and the true image is obtained. In the second stage the Lucy-Richardson algorithm is applied to selected wavelet coefficients of the true image estimate. The significant contribution of this paper is that the processing of images is done in the wavelet domain. Pre-filtering is also done as a sub-stage to avoid unwanted ringing effects. Real cardiac images are used for the quantitative and qualitative evaluation of the algorithm. Blur metric, peak signal-to-noise ratio and the Tenengrad criterion are used as quantitative measures. Comparison against other existing de-blurring algorithms is also done. The simulation results indicate that the proposed method effectively reduces the blur present in the image.
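    A minimal sketch of the two-stage idea (the maximum-likelihood PSF estimate of stage one is replaced here by an assumed Gaussian PSF, and the pywt and scikit-image packages are assumed available): deconvolve the selected wavelet band with Richardson-Lucy and reconstruct.

      # Wavelet-domain Richardson-Lucy sketch; PSF and wavelet are assumptions.
      import numpy as np
      import pywt
      from skimage.restoration import richardson_lucy

      def gaussian_psf(size=9, sigma=1.5):
          ax = np.arange(size) - size // 2
          xx, yy = np.meshgrid(ax, ax)
          psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
          return psf / psf.sum()

      def deblur(image, psf):
          coeffs = pywt.wavedec2(image, "db2", level=1)
          approx, details = coeffs[0], coeffs[1:]
          # Deconvolve only the approximation band (normalized for stability).
          approx = richardson_lucy(approx / approx.max(), psf, 20)
          return pywt.waverec2([approx] + list(details), "db2")

      image = np.random.rand(64, 64)   # stand-in for a cardiac SPECT slice
      print(deblur(image, gaussian_psf()).shape)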

  5. A Taxonomy of Reconfigurable Single-/Multiprocessor Systems-on-Chip

    Directory of Open Access Journals (Sweden)

    Diana Göhringer

    2009-01-01

    Full Text Available Runtime adaptivity of hardware in processor architectures is a novel trend, which is under investigation in a variety of research labs all over the world. The runtime exchange of modules, implemented on reconfigurable hardware, affects the instruction flow (e.g., in reconfigurable instruction set processors) or the data flow, which has a strong impact on the performance of an application. Furthermore, the choice of a certain processor architecture related to the class of target applications is a crucial point in application development. A simple example is the domain of high-performance computing applications found in meteorology or high-energy physics, where vector processors are the optimal choice. A classification scheme for computer systems was provided in 1966 by Flynn, in which single/multiple data and instruction streams were combined into four types of architectures. This classification is now used as a foundation for an extended classification scheme that includes runtime adaptivity as a further degree of freedom for processor architecture design. The developed scheme is validated by a multiprocessor system implemented on reconfigurable hardware as well as by a classification of existing static and reconfigurable processor systems.

  6. A Real-Time Sound Field Rendering Processor

    Directory of Open Access Journals (Sweden)

    Tan Yiyu

    2017-12-01

    Real-time sound field renderings are computationally intensive and memory-intensive. Traditional rendering systems based on computer simulations suffer from limited memory bandwidth and arithmetic resources. The computation is time-consuming, and the sample rate of the output sound is low because of the long computation time at each time step. In this work, a processor with a hybrid architecture is proposed to speed up computation and improve the sample rate of the output sound, and an interface is developed for system scalability by simply cascading many chips to enlarge the simulated area. To render a three-minute Beethoven wave sound in a small shoe-box room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the sound rendering at run-time, while a software simulation with OpenMP parallelization takes about 12.70 min on a personal computer (PC) with 32 GB of random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput of the software simulation is about 194 M grids/s, while it is 51.2 G grids/s on the prototype machine, even though the clock frequency of the prototype machine is much lower than that of the PC. The rendering processor with one processing element (PE) and interfaces consumes about 238,515 gates when fabricated in the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and the power consumption is about 143.8 mW.

  7. Comparison of single photon emission computed tomography-computed tomography, computed tomography, single photon emission computed tomography and planar scintigraphy for characterization of isolated skull lesions seen on bone scintigraphy in cancer patients

    International Nuclear Information System (INIS)

    Sharma, Punit; Jain, Tarun Kumar; Reddy, Rama Mohan; Faizi, Nauroze Ashgar; Bal, Chandrasekhar; Malhotra, Arun; Kumar, Rakesh

    2014-01-01

    The purpose of this study is to evaluate the added value of single photon emission computed tomography-computed tomography (SPECT-CT) over planar scintigraphy, SPECT and CT alone for characterization of isolated skull lesions seen on bone scintigraphy (BS) in cancer patients. A total of 32 cancer patients (age: 39.5 ± 21.9 years; male:female ratio 1:1) with 36 isolated skull lesions on planar BS underwent SPECT-CT of the skull. Planar BS, SPECT, CT and SPECT-CT images were evaluated in separate sessions to minimize recall bias. A scoring scale of 1-5 was used, where 1 is definitely metastatic, 2 is probably metastatic, 3 is indeterminate, 4 is probably benign and 5 is definitely benign. With receiver operating characteristic analysis, the area under the curve (AUC) was calculated for each modality. For calculation of sensitivity, specificity and predictive values, a score ≤ 3 was taken as metastatic. Clinical/imaging follow-up and/or histopathology were taken as the reference standard. Of the 36 skull lesions, 11 lesions each were in the frontal, parietal and occipital bones, while three lesions were in the temporal bone. Of these 36 lesions, 16 were indeterminate (score 3) on planar imaging and SPECT, five on CT and none on SPECT-CT. The AUC was largest for SPECT-CT, followed by CT, SPECT and planar scintigraphy, respectively. Planar scintigraphy was inferior to SPECT-CT (P = 0.006) and CT (P = 0.012) but not SPECT (P = 0.975). SPECT was also inferior to SPECT-CT (P = 0.007) and CT (P = 0.015). Although no significant difference was found between SPECT-CT and CT (P = 0.469), the former was more specific (100% vs. 94%). SPECT-CT is better than planar scintigraphy and SPECT alone for correctly characterizing isolated skull lesions on BS in cancer patients. It is more specific than CT, but provides no significant advantage over CT alone for this purpose.

  8. Design and implementation of a high performance network security processor

    Science.gov (United States)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.

  9. DEMONIC programming: a computational language for single-particle equilibrium thermodynamics, and its formal semantics.

    Directory of Open Access Journals (Sweden)

    Samson Abramsky

    2015-11-01

    Maxwell's Demon, 'a being whose faculties are so sharpened that he can follow every molecule in its course', has been the centre of much debate about its ability to violate the second law of thermodynamics. Landauer's hypothesis, that the Demon must erase its memory and incur a thermodynamic cost, has become the standard response to Maxwell's dilemma, and its implications for the thermodynamics of computation reach into many areas of quantum and classical computing. It remains, however, still a hypothesis. Debate has often centred around simple toy models of a single particle in a box. Despite their simplicity, the ability of these systems to accurately represent thermodynamics (specifically, to satisfy the second law), and whether or not they display Landauer Erasure, has been a matter of ongoing argument. The recent Norton-Ladyman controversy is one such example. In this paper we introduce a programming language to describe these simple thermodynamic processes, and give a formal operational semantics and program logic as a basis for formal reasoning about thermodynamic systems. We formalise the basic single-particle operations as statements in the language, and then show that the second law must be satisfied by any composition of these basic operations. This is done by finding a computational invariant of the system. We show, furthermore, that this invariant requires an erasure cost to exist within the system, equal to kT ln 2 for a bit of information: Landauer Erasure becomes a theorem of the formal system. The Norton-Ladyman controversy can therefore be resolved in a rigorous fashion, and moreover the formalism we introduce gives a set of reasoning tools for further analysis of Landauer Erasure, which are provably consistent with the second law of thermodynamics.
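
    For reference, the erasure cost that the paper derives as a theorem is the standard Landauer bound; evaluated numerically at room temperature:

    ```latex
    E_{\mathrm{erase}} = k_B T \ln 2
                       \approx (1.38\times10^{-23}\,\mathrm{J/K})\,(300\,\mathrm{K})\,(0.693)
                       \approx 2.9\times10^{-21}\ \mathrm{J\ per\ bit}.
    ```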

  10. FPGA Based Intelligent Co-operative Processor in Memory Architecture

    DEFF Research Database (Denmark)

    Ahmed, Zaki; Sotudeh, Reza; Hussain, Dil Muhammad Akbar

    2011-01-01

    In a continuing effort to improve computer system performance, Processor-In-Memory (PIM) architecture has emerged as an alternative solution. PIM architecture incorporates computational units and control logic directly on the memory to provide immediate access to the data. To exploit the potential benefits of PIM, a concept of Co-operative Intelligent Memory (CIM) was developed by the intelligent system group of the University of Hertfordshire, based on the previously developed Co-operative Pseudo Intelligent Memory (CPIM). This paper provides an overview on previous works (CPIM, CIM) and realization…

  11. Formal characterizations of FA-based string processors

    CSIR Research Space (South Africa)

    Ngassam, EK

    2010-08-01

    Full Text Available stream_source_info Ngassam_2010.pdf.txt stream_content_type text/plain stream_size 7434 Content-Encoding UTF-8 stream_name Ngassam_2010.pdf.txt Content-Type text/plain; charset=UTF-8 Formal Characterizations of FA...-based String Processors Ernest Ketcha Ngassam1,2,?, Bruce W. Watson3, and Derrick G. Kourie3 1SAP Meraka UTD, Pretoria, South Africa 2School of Computing University of South Africa Pretoria 0001 ernest.ngassam@sap.com 3Department of Computer Science...

  12. Brain receptor single-photon emission computer tomography with 123I Datscan in Parkinson's disease

    International Nuclear Information System (INIS)

    Minchev, D.; Peshev, N.; Kostadinova, I.; Grigorova, O.; Trindev, P.; Shotekov, P.

    2005-01-01

    The clinical features of Parkinson's disease are not sufficient for early diagnosis of the disease. Positron emission tomography and receptor single-photon emission tomography can be used for imaging the functional integrity of nigrostriatal dopaminergic structures. 24 patients (17 men and 7 women) were investigated: 20 with Parkinson's disease and 4 with essential tremor. The radiopharmaceutical, 123I-Datscan (ioflupane, bound with 123I), is a cocaine analogue with selective affinity for the dopamine transporters located in the dopaminergic nigrostriatal terminals in the striatum. Single-photon emission computer tomography was performed with a SPECT gamma camera (ADAC, SH Epic detector). The scintigraphic study was made 3 to 6 hours after intravenous injection of the radiopharmaceutical 123I-Datscan at a dose of 185 MBq. 120 frames were registered, each lasting 22 seconds, with a gamma camera rotation of 360°. After generation of transversal slices we generated two composite pictures: the first images the striatum, the second the occipital region. Two ratios were calculated representing the uptake of the radiopharmaceutical in the left and right striatum. Qualitative and quantitative criteria were elaborated for evaluating the scintigraphic patterns. Decreased, nonhomogeneous and asymmetric uptake of the radiopharmaceutical, coupled with low quantitative parameters in the range from 1.44 to 2.87, represents the characteristic scintigraphic pattern for Parkinson's disease with a clear clinical picture. Homogeneous, high-intensity and symmetric uptake of the radiopharmaceutical in the striatum, coupled with its clear border and with quantitative parameters up to 4.40, represents the scintigraphic pattern in the two patients with essential tremor. Receptor single-photon emission computer tomography with 123I-Datscan represents an accurate nuclear-medicine method for precise diagnosis of Parkinson's disease and for its differentiation from essential tremor.

  13. LASIP-III, a generalized processor for standard interface files

    International Nuclear Information System (INIS)

    Bosler, G.E.; O'Dell, R.D.; Resnik, W.M.

    1976-03-01

    The LASIP-III code was developed for processing Version III standard interface data files which have been specified by the Committee on Computer Code Coordination. This processor performs two distinct tasks, namely, transforming free-field format, BCD data into well-defined binary files and providing for printing and punching data in the binary files. While LASIP-III is exported as a complete free-standing code package, techniques are described for easily separating the processor into two modules, viz., one for creating the binary files and one for printing the files. The two modules can be separated into free-standing codes or they can be incorporated into other codes. Also, the LASIP-III code can be easily expanded for processing additional files, and procedures are described for such an expansion. 2 figures, 8 tables

  14. NMRFx Processor: a cross-platform NMR data processing program

    International Nuclear Information System (INIS)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A.

    2016-01-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.
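
    The iterative soft thresholding mentioned above can be sketched generically in a few lines; this illustrates the algorithm family for non-uniformly sampled data, and is not NMRFx Processor's actual implementation:

    ```python
    # Generic iterative-soft-thresholding (IST) sketch for reconstructing a
    # non-uniformly sampled 1D signal. Illustration only.
    import numpy as np

    def ist_reconstruct(samples, mask, iterations=100, threshold_decay=0.95):
        """samples: complex time-domain vector, zero at unmeasured points;
        mask: boolean array marking the measured points."""
        x = samples.copy()
        thresh = np.abs(np.fft.fft(x)).max()
        for _ in range(iterations):
            spectrum = np.fft.fft(x)
            mag = np.abs(spectrum)
            # soft threshold: shrink every spectral magnitude toward zero
            shrunk = np.where(mag > thresh,
                              (mag - thresh) / np.maximum(mag, 1e-12), 0.0) * spectrum
            x = np.fft.ifft(shrunk)
            x[mask] = samples[mask]          # re-impose the measured data
            thresh *= threshold_decay        # lower the threshold gradually
        return np.fft.fft(x)
    ```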

  15. Efficacy of Code Optimization on Cache-Based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses. But they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
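
    The canonical locality transformation behind such optimizations is loop blocking (tiling). A NumPy sketch of a blocked matrix multiply, for exposition only (absolute timings on today's machines will differ from the platforms studied):

    ```python
    # Blocked (tiled) matrix multiply: each cache-sized sub-block of A and B
    # is reused many times before being evicted, increasing access locality.
    import numpy as np

    def blocked_matmul(A, B, bs=64):
        n = A.shape[0]                      # assumes square, equally sized matrices
        C = np.zeros_like(A)
        for i in range(0, n, bs):
            for j in range(0, n, bs):
                for k in range(0, n, bs):
                    C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
        return C

    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    assert np.allclose(blocked_matmul(A, B), A @ B)
    ```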

  17. A Time-Composable Operating System for the Patmos Processor

    DEFF Research Database (Denmark)

    Ziccardi, Marco; Schoeberl, Martin; Vardanega, Tullio

    2015-01-01

    In the last couple of decades we have witnessed a steady growth in the complexity and widespread use of real-time systems. In order to master the rising complexity in the timing behaviour of those systems, rightful attention has been given to the development of time-predictable computer architectures. A time-composable operating system, on top of a time-composable processor, facilitates incremental development, which is highly desirable for industry. This paper makes a twofold contribution. First, we present enhancements to the Patmos processor to allow achieving time composability at the operating system level. Second, we extend an existing time-composable operating system, TiCOS, to make best use of advanced Patmos hardware features in the pursuit of time composability.

  18. A VLSI image processor via pseudo-mersenne transforms

    International Nuclear Information System (INIS)

    Sei, W.J.; Jagadeesh, J.M.

    1986-01-01

    The computational burden of image processing in medical fields, where a large amount of information must be processed quickly and accurately, has led to consideration of special-purpose image processor chip designs for some time. The very large scale integration (VLSI) revolution has made it cost-effective and feasible to consider the design of special-purpose chips for medical imaging fields. This paper describes a VLSI CMOS chip suitable for parallel implementation of image processing algorithms and cyclic convolutions using the Pseudo-Mersenne Number Transform (PMNT). The main advantages of the PMNT over the Fast Fourier Transform (FFT) are: (1) no multiplications are required; (2) integer arithmetic is used. The design and development of this processor, which operates on 32-point convolutions or 5 x 5 image windows, are described.
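
    The key property can be shown with a small number-theoretic transform over a Mersenne modulus (a simplification of the pseudo-Mersenne case): with M = 2^5 - 1 = 31 the element 2 is a 5th root of unity, so every twiddle-factor multiplication reduces to a binary shift. Modulus, length and data below are illustrative, not the chip's parameters:

    ```python
    # Number-theoretic cyclic convolution over a Mersenne modulus.
    # Powers of the root 2 are binary shifts, the property the PMNT exploits.
    M, N, ROOT = 31, 5, 2        # modulus, length, root of unity (2^5 = 32 = 1 mod 31)

    def ntt(x, root):
        return [sum(x[n] * pow(root, n * k, M) for n in range(N)) % M
                for k in range(N)]

    def cyclic_convolution(a, b):
        A, B = ntt(a, ROOT), ntt(b, ROOT)
        C = [(A[k] * B[k]) % M for k in range(N)]
        inv_root = pow(ROOT, M - 2, M)       # 2^-1 mod 31 = 16
        inv_N = pow(N, M - 2, M)             # 5^-1 mod 31 = 25
        return [(inv_N * sum(C[k] * pow(inv_root, n * k, M) for k in range(N))) % M
                for n in range(N)]

    a, b = [1, 2, 0, 1, 0], [3, 0, 1, 0, 0]
    direct = [sum(a[i] * b[(n - i) % N] for i in range(N)) % M for n in range(N)]
    assert cyclic_convolution(a, b) == direct
    print(direct)
    ```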

  19. SPP: A data base processor data communications protocol

    Science.gov (United States)

    Fishwick, P. A.

    1983-01-01

    The design and implementation of a data communications protocol for the Intel Data Base Processor (DBP) is defined. The protocol is termed SPP (Service Port Protocol) since it enables data transfer between the host computer and the DBP service port. The protocol implementation is extensible in that it is explicitly layered and the protocol functionality is hierarchically organized. Extensive trace and performance capabilities have been supplied with the protocol software to permit optional efficient monitoring of the data transfer between the host and the Intel data base processor. Machine independence was considered to be an important attribute during the design and implementation of SPP. The protocol source is fully commented and is included in Appendix A of this report.

  20. Single-polymer dynamics under constraints: scaling theory and computer experiment

    International Nuclear Information System (INIS)

    Milchev, Andrey

    2011-01-01

    The relaxation, diffusion and translocation dynamics of single linear polymer chains in confinement is briefly reviewed, with emphasis on the comparison between theoretical scaling predictions and observations from experiment or, most frequently, from computer simulations. Besides cylindrical, spherical and slit-like constraints, related problems such as chain dynamics in a random medium and translocation dynamics through a nanopore are also considered. Another particular kind of confinement is imposed by polymer adsorption on attractive surfaces or selective interfaces; a short overview of single-chain dynamics there is also contained in this survey. While both theory and numerical experiments consider predominantly coarse-grained models of self-avoiding linear chain molecules with typically Rouse dynamics, we also note some recent studies which examine the impact of hydrodynamic interactions on polymer dynamics in confinement. In all of the aforementioned cases we focus mainly on the consequences of imposed geometric restrictions on single-chain dynamics and try to check our degree of understanding by assessing the agreement between theoretical predictions and observations. (topical review)

  1. Endoleak detection using single-acquisition split-bolus dual-energy computer tomography (DECT)

    Energy Technology Data Exchange (ETDEWEB)

    Javor, D.; Wressnegger, A.; Unterhumer, S.; Kollndorfer, K.; Nolz, R.; Beitzke, D.; Loewe, C. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria)

    2017-04-15

    To assess single-phase, dual-energy computed tomography (DECT) with a split-bolus technique and reconstruction of virtual non-enhanced images for the detection of endoleaks after endovascular aneurysm repair (EVAR). Fifty patients referred for routine post-EVAR follow-up CT, each with a history of at least one post-EVAR follow-up CT examination using our standard biphasic (arterial and venous phase) routine protocol (which was used as the reference standard), were included in this prospective trial. An in-patient comparison and an analysis of the split-bolus protocol and the previously used double-phase protocol were performed with regard to differences in diagnostic accuracy, radiation dose, and image quality. The analysis showed a significant radiation dose reduction of up to 42 % using the single-acquisition split-bolus protocol, while maintaining comparable diagnostic accuracy (primary endoleak detection rate of 96 %). Image quality between the two protocols was comparable and only slightly inferior for the split-bolus scan (2.5 vs. 2.4). Using the single-acquisition, split-bolus approach allows for a significant dose reduction while maintaining high image quality, resulting in effective endoleak identification. (orig.)

  2. Free vibration analysis of single-walled boron nitride nanotubes based on a computational mechanics framework

    Science.gov (United States)

    Yan, J. W.; Tong, L. H.; Xiang, Ping

    2017-12-01

    Free vibration behaviors of single-walled boron nitride nanotubes are investigated using a computational mechanics approach. The Tersoff-Brenner potential is used to describe the atomic interaction between boron and nitrogen atoms. The higher-order Cauchy-Born rule is employed to establish the constitutive relationship for single-walled boron nitride nanotubes on the basis of higher-order gradient continuum theory. It bridges the gap between nanoscale lattice structures and a continuum body. A mesh-free modeling framework is constructed, using the moving Kriging interpolation which automatically satisfies the higher-order continuity, to implement numerical simulation in order to match the higher-order constitutive model. In comparison with conventional atomistic simulation methods, the established atomistic-continuum multi-scale approach possesses advantages in tackling atomic structures with high accuracy and high efficiency. Free vibration characteristics of single-walled boron nitride nanotubes with different boundary conditions, tube chiralities, lengths and radii are examined in case studies. In this research, it is pointed out that a critical radius exists for the evaluation of fundamental vibration frequencies of boron nitride nanotubes; opposite trends can be observed prior to and beyond the critical radius. Simulation results are presented and discussed.

  3. Analytical Bounds on the Threads in IXP1200 Network Processor

    OpenAIRE

    Ramakrishna, STGS; Jamadagni, HS

    2003-01-01

    Increasing link speeds have placed an enormous burden on processing requirements, and processors are expected to carry out a variety of tasks. Network Processor (NP) [1] [2] is the blanket name given to processors that trade off flexibility against performance. Network Processors are offered by a number of vendors to take over the main burden of network-related processing from conventional processors. The Network Processors cover a spectrum of design trade-offs…

  4. Software for the ACP [Advanced Computer Program] multiprocessor system

    International Nuclear Information System (INIS)

    Biel, J.; Areti, H.; Atac, R.

    1987-01-01

    Software has been developed for use with the Fermilab Advanced Computer Program (ACP) multiprocessor system. The software was designed to make a system of a hundred independent node processors as easy to use as a single, powerful CPU. Subroutines have been developed by which a user's host program can send data to and get results from the program running in each of his ACP node processors. Utility programs make it easy to compile and link host and node programs, to debug a node program on an ACP development system, and to submit a debugged program to an ACP production system
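
    The host/node pattern described above can be mimicked in a few lines with Python's multiprocessing module in place of the ACP subroutine library; this is an analogy to illustrate the structure, not the ACP API:

    ```python
    # Host/node sketch: the "host" farms independent events out to worker
    # "node" processes and collects the results, as the ACP host program
    # does with its node processors.
    from multiprocessing import Pool

    def node_program(event):
        # stand-in for the physics code running on one node processor
        return sum(x * x for x in event)

    if __name__ == "__main__":
        events = [list(range(i, i + 100)) for i in range(16)]
        with Pool(processes=4) as pool:       # four "node processors"
            results = pool.map(node_program, events)
        print(results[:3])
    ```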

  5. [Treatment choice in dacryostenosis based on single-photon emission computed tomography and X-ray computed tomography findings].

    Science.gov (United States)

    At'kova, E L; Yartsev, V D; Tomashevskiy, I O; Krakhovetskiy, N N

    2016-01-01

    To develop surgical indications for dacryostenosis within the vertical portion of the lacrimal pathways that would consider findings of single-photon emission computed tomography (SPECT) combined with X-ray computed tomography (CT). A total of 96 patients with isolated vertical-portion dacryostenosis (127 cases) were enrolled. The examination included collecting Munk's scores for epiphora, optical coherence tomography of the lower tear meniscus, lacrimal scintigraphy, and SPECT/CT. Group 1 (40 cases) was composed of patients with lacrimal obstruction on CT, and group 2 (87 cases) of those whose lacrimal pathways proved passable. There were also 3 patients (4 cases) from group 1 whose lacrimal pathways, despite being blocked on CT, were still passable on SPECT. Surgeries performed in group 1 were endoscopic endonasal dacryocystorhinostomy (DCR) (36 cases) and pathway recanalization with bicanalicular intubation and balloon dacryoplasty (DCP) (4 cases). In group 2, all patients (87 cases) underwent recanalization with bicanalicular intubation (supplemented with balloon DCP in 32 cases). Surgical results were evaluated 8-12 months after the treatment. In group 2, particular attention was paid to the concordance of the locations of dacryostenosis provided by CT and SPECT scans. Favorable outcomes of endoscopic endonasal DCR were obtained in as many as 32 of the 36 cases from group 1 (88.9%), while in 4 cases (11.1%) the condition relapsed. Of those patients whose stenosis was not complete on SPECT, 3 cases (75.0%) improved and 1 (25.0%) relapsed. In group 2, favorable outcomes were obtained in 65 cases (74.7%); relapses numbered 22 (25.3%). A high concordance of stenosis locations by CT and SPECT was noted in 60 cases of those who improved (92.3%) and 3 cases of those who relapsed (13.6%). The value of the information provided by SPECT/CT has proved high in patients with nasolacrimal duct stenosis or obstruction. A combined scan allows one to establish causal relationships between anatomical changes

  6. Effect of processor temperature on film dosimetry

    International Nuclear Information System (INIS)

    Srivastava, Shiv P.; Das, Indra J.

    2012-01-01

    Optical density (OD) of a radiographic film plays an important role in radiation dosimetry, which depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air film interface, postexposure processing time, and temperature of the processor. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed in the reference condition (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing films. The temperature of the processor was adjusted manually with increasing temperature. At each temperature, a set of films was processed to evaluate OD at a given dose. For both films, OD is a linear function of processor temperature in the range of 29.4–40.6°C (85–105°F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the changes in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.
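
    A sketch of how the reported linear OD-temperature relationship could be applied as a per-session correction; the calibration numbers below are hypothetical placeholders, not values from the paper:

    ```python
    # Fit a calibration line OD(T) and shift measured OD back to a reference
    # processor temperature. Data values are hypothetical.
    import numpy as np

    temps = np.array([29.4, 32.2, 35.0, 37.8, 40.6])   # processor temperature, deg C
    od    = np.array([1.02, 1.10, 1.19, 1.27, 1.36])   # hypothetical OD readings

    slope, intercept = np.polyfit(temps, od, 1)        # OD(T) = slope*T + intercept

    def od_at_reference(od_measured, t_measured, t_ref=35.0):
        """Shift a measured OD to the reference processor temperature."""
        return od_measured + slope * (t_ref - t_measured)

    print(f"OD(T) = {slope:.4f}*T + {intercept:.3f}")
    print(od_at_reference(1.36, 40.6))
    ```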

  7. Optical Associative Processors For Visual Perception

    Science.gov (United States)

    Casasent, David; Telfer, Brian

    1988-05-01

    We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.
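
    The hetero- vs. autoassociative distinction can be illustrated with the simplest linear associative memory (an outer-product sum); the paper's optical realizations and synthesis techniques are far richer than this sketch:

    ```python
    # Linear heteroassociative memory: store input -> class-label pairs as a
    # sum of outer products y_i x_i^T and recall by matrix-vector product.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(4, 64))   # four stored 64-element patterns
    Y = np.eye(4)                                # distinct output labels (hetero)

    W = Y.T @ X                                  # memory matrix, sum of outer products

    def recall(x):
        scores = W @ x / X.shape[1]              # random +-1 patterns are near-orthogonal
        return int(np.argmax(scores))

    noisy = X[2] * rng.choice([1, 1, 1, -1], size=64)   # flip ~25% of the elements
    print(recall(noisy))                          # most likely prints 2, the stored class
    ```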

  8. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction.

  9. The ATLAS fast tracker processor design

    CERN Document Server

    Volpi, Guido; Albicocco, Pietro; Alison, John; Ancu, Lucian Stefan; Anderson, James; Andari, Nansi; Andreani, Alessandro; Andreazza, Attilio; Annovi, Alberto; Antonelli, Mario; Asbah, Needa; Atkinson, Markus; Baines, J; Barberio, Elisabetta; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Blair, R E; Bogdan, Mircea; Boveia, Antonio; Britzger, Daniel; Bryant, Partick; Burghgrave, Blake; Calderini, Giovanni; Camplani, Alessandra; Cavaliere, Viviana; Cavasinni, Vincenzo; Chakraborty, Dhiman; Chang, Philip; Cheng, Yangyang; Citraro, Saverio; Citterio, Mauro; Crescioli, Francesco; Dawe, Noel; Dell'Orso, Mauro; Donati, Simone; Dondero, Paolo; Drake, G; Gadomski, Szymon; Gatta, Mauro; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Howarth, James William; Iizawa, Tomoya; Ilic, Nikolina; Jiang, Zihao; Kaji, Toshiaki; Kasten, Michael; Kawaguchi, Yoshimasa; Kim, Young Kee; Kimura, Naoki; Klimkovich, Tatsiana; Kolb, Mathis; Kordas, K; Krizka, Karol; Kubota, T; Lanza, Agostino; Li, Ho Ling; Liberali, Valentino; Lisovyi, Mykhailo; Liu, Lulu; Love, Jeremy; Luciano, Pierluigi; Luongo, Carmela; Magalotti, Daniel; Maznas, Ioannis; Meroni, Chiara; Mitani, Takashi; Nasimi, Hikmat; Negri, Andrea; Neroutsos, Panos; Neubauer, Mark; Nikolaidis, Spiridon; Okumura, Y; Pandini, Carlo; Petridou, Chariclia; Piendibene, Marco; Proudfoot, James; Rados, Petar Kevin; Roda, Chiara; Rossi, Enrico; Sakurai, Yuki; Sampsonidis, Dimitrios; Saxon, James; Schmitt, Stefan; Schoening, Andre; Shochet, Mel; Shoijaii, Jafar; Soltveit, Hans Kristian; Sotiropoulou, Calliope-Louisa; Stabile, Alberto; Swiatlowski, Maximilian J; Tang, Fukun; Taylor, Pierre Thor Elliot; Testa, Marianna; Tompkins, Lauren; Vercesi, V; Wang, Rui; Watari, Ryutaro; Zhang, Jianhong; Zeng, Jian Cong; Zou, Rui; Bertolucci, Federico

    2015-01-01

    The extended use of tracking information at the trigger level in the LHC is crucial for the trigger and data acquisition (TDAQ) system to fulfill its task. Precise and fast tracking is important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, complete reconstruction at full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged-particle tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker info...

  10. Development of Innovative Design Processor

    International Nuclear Information System (INIS)

    Park, Y.S.; Park, C.O.

    2004-01-01

    Nuclear design analysis requires time-consuming and error-prone model-input preparation, code runs, output analysis and a quality assurance process. To reduce human effort and improve design quality and productivity, the Innovative Design Processor (IDP) is being developed. Two basic principles of IDP are the document-oriented design and the web-based design. The document-oriented design is that, if the designer writes a design document called an active document and feeds it to a special program, the final document with complete analysis, tables and plots is made automatically. The active documents can be written with ordinary HTML editors or created automatically on the web, which is the other framework of IDP. Using a proper mix of server-side and client-side programming under the LAMP (Linux/Apache/MySQL/PHP) environment, the design process on the web is modeled in a design-wizard style so that even a novice designer can produce the design document easily. This automation using the IDP is now being implemented for all reload designs of Korea Standard Nuclear Power Plant (KSNP) type PWRs. The introduction of this process will allow a large reduction in all KSNP reload design efforts and provide a platform for design and R and D tasks of KNFC. (authors)

  11. Onboard spectral imager data processor

    Science.gov (United States)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.

  12. Novel memory architecture for video signal processor

    Science.gov (United States)

    Hung, Jen-Sheng; Lin, Chia-Hsing; Jen, Chein-Wei

    1993-11-01

    An on-chip memory architecture for a video signal processor (VSP) is proposed. This memory structure is a two-level design for the different data localities in video applications. The upper level, Memory A, provides enough storage capacity to reduce the impact of the limited chip I/O bandwidth, and the lower level, Memory B, provides enough data parallelism and flexibility to meet the requirements of multiple reconfigurable pipeline function units in a single VSP chip. The needed memory size is decided by a memory usage analysis for video algorithms and the number of function units. Both levels of memory adopt a dual-port memory scheme to sustain simultaneous read and write operations. In particular, Memory B uses multiple one-read-one-write memory banks to emulate a real multiport memory. Therefore, one can change the configuration of Memory B to several sets of memories with variable read/write ports by adjusting the bus switches. The numbers of read ports and write ports in the proposed memory can then meet the requirements of the data flow patterns in different video coding algorithms. We have finished a prototype memory design using 1.2-µm SPDM SRAM technology and will fabricate it through TSMC in Taiwan.

  13. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the muon spectrometer. The processor will fit candidate muon tracks in the drift tubes in real time, significantly improving the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and a compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.
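
    A sketch of the Legendre-transform segment-finding idea: each drift circle (wire position plus drift radius) votes for its two tangent-line families, and genuine track segments appear as peaks in a (theta, distance) accumulator. Binning and units are illustrative; the FPGA and associative-memory implementations are not modeled here:

    ```python
    # Accumulate tangent-line votes from drift circles and return the peak.
    import numpy as np

    def legendre_segments(hits, n_theta=180, n_d=200, d_max=50.0):
        thetas = np.linspace(0.0, np.pi, n_theta)
        acc = np.zeros((n_theta, n_d), dtype=int)
        for (z, y, r) in hits:                       # wire position, drift radius
            d_wire = z * np.cos(thetas) + y * np.sin(thetas)
            for sign in (+1.0, -1.0):                # two tangent lines per circle
                bins = np.round((d_wire + sign * r + d_max)
                                / (2 * d_max) * (n_d - 1)).astype(int)
                ok = (bins >= 0) & (bins < n_d)
                acc[np.arange(n_theta)[ok], bins[ok]] += 1
        k, b = np.unravel_index(np.argmax(acc), acc.shape)
        return thetas[k], b / (n_d - 1) * 2 * d_max - d_max   # best (theta, d)
    ```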

  14. A high-accuracy optical linear algebra processor for finite element applications

    Science.gov (United States)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
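
    The digital-encoding idea can be sketched for scalar multiplication: operands are split into low-precision digits, digit products are formed within a small dynamic range (as an analog optical multiplier could provide), and weighted recombination restores full precision. The radix and widths here are illustrative:

    ```python
    # Multiplication by digit decomposition: every partial product needs only
    # a few bits of dynamic range, yet the recombined result is exact.
    RADIX_BITS, NDIGITS = 2, 8            # 8 base-4 digits cover 16-bit integers

    def to_digits(v):
        return [(v >> (RADIX_BITS * i)) & 0b11 for i in range(NDIGITS)]

    def digit_multiply(a, b):
        da, db = to_digits(a), to_digits(b)
        total = 0
        for i, x in enumerate(da):
            for j, y in enumerate(db):
                total += (x * y) << (RADIX_BITS * (i + j))  # digit product fits in 4 bits
        return total

    assert digit_multiply(51234, 917) == 51234 * 917
    ```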

  15. Discrete Fourier transformation processor based on complex radix (−1 + j) number system

    Directory of Open Access Journals (Sweden)

    Anidaphi Shadap

    2017-02-01

    Complex radix (−1 + j) allows the arithmetic operations of complex numbers to be done without resorting to divide-and-conquer rules, which offers a significant speed improvement in complex-number computation circuitry. The design and hardware implementation of a complex radix (−1 + j) converter are introduced in this paper. Extensive simulation results have been incorporated and an application of this converter towards the implementation of a discrete Fourier transformation (DFT) processor has been presented. The functionality of the DFT processor has been verified in the Xilinx ISE design suite version 14.7, and performance parameters like propagation delay and dynamic switching power consumption have been calculated with the Virtuoso platform in Cadence. The proposed DFT processor has been implemented through conversion, multiplication and addition. The performance parameter matrix, in terms of delay and power consumption, offers a significant improvement over other traditional implementations of the DFT processor.
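
    A sketch of conversion into the (−1 + j) system with digits {0, 1}, which is the textbook recurrence for this numeral system (a Gaussian integer a + bj is divisible by (−1 + j) exactly when a + b is even, so the low digit is (a + b) mod 2); the paper's hardware realization is not modeled here:

    ```python
    # Convert a Gaussian integer to base (-1 + j) with digits {0, 1} and back.
    def to_base_minus1_plus_j(z):
        digits = []
        while z != 0:
            d = (int(z.real) + int(z.imag)) & 1   # low digit: parity of a + b
            digits.append(d)
            z = (z - d) * (-1 - 1j) / 2           # exact division by (-1 + j)
        return digits or [0]                       # least significant digit first

    def from_digits(digits):
        value, power = 0 + 0j, 1 + 0j
        for d in digits:
            value += d * power
            power *= (-1 + 1j)
        return value

    z = 5 - 3j
    digits = to_base_minus1_plus_j(z)
    assert from_digits(digits) == z
    print(digits)
    ```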

  16. A precise goniometer/tensiometer using a low cost single-board computer

    Science.gov (United States)

    Favier, Benoit; Chamakos, Nikolaos T.; Papathanasiou, Athanasios G.

    2017-12-01

    Measuring the surface tension and the Young contact angle of a droplet is extremely important for many industrial applications. Here, considering the booming interest in small and cheap but precise experimental instruments, we have constructed a low-cost contact angle goniometer/tensiometer based on a single-board computer (Raspberry Pi). The device runs an axisymmetric drop shape analysis (ADSA) algorithm written in Python. The code, here named DropToolKit, was developed in-house. We initially present the mathematical framework of our algorithm and then we validate our software tool against other well-established ADSA packages, including the commercial ramé-hart DROPimage Advanced as well as the DropAnalysis plugin in ImageJ. After successfully testing various combinations of liquids and solid surfaces, we concluded that our prototype device would be highly beneficial for industrial applications as well as for scientific research in wetting phenomena compared to the commercial solutions.
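
    For context, ADSA works by fitting the observed drop profile to solutions of the axisymmetric Young-Laplace equation (standard background, not specific to this paper):

    ```latex
    \gamma\left(\frac{1}{R_1} + \frac{1}{R_2}\right) = \Delta p_0 + \Delta\rho\, g\, z ,
    ```

    where \gamma is the surface tension, R_1 and R_2 are the principal radii of curvature, \Delta p_0 is the pressure jump at the drop apex and \Delta\rho the density difference; the fitted \gamma together with the profile tangent at the contact line yields the tension and the Young contact angle.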

  17. Single photon emission computed tomography study of human pulmonary perfusion: preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    Carratu, L; Sofia, M [Naples Univ. (Italy). Facolta di Medicina e Chirurgia; Salvatore, M; Muto, P; Ariemma, G [Istituto Nazionale per la Prevenzione, Lo Studio e La Cura dei Tumori Fondazione Pascale, Naples (Italy); Lopez-Majano, V [Cook County Hospital, Chicago, IL (USA). Nuclear Medicine Div.

    1984-02-01

    Single photon emission computed tomography (SPECT) was performed with 99Tcm-albumin macroaggregates to study human pulmonary perfusion in healthy subjects and patients with respiratory diseases such as chronic obstructive pulmonary disease (COPD) and lung neoplasms. The reconstructed SPECT data were displayed in coronal, transverse and sagittal plane sections and compared to conventional perfusion scans. The SPECT data gave more detailed anatomical information about the extent of damage and the morphology of the pulmonary vascular bed. In healthy subjects and COPD patients, qualitative and quantitative assessments of pulmonary perfusion could be obtained from serial SPECT scans with respect to distribution and relative concentration of the injected radiopharmaceutical. Furthermore, SPECT of pulmonary perfusion has been useful in detecting the extent of damage to the pulmonary circulation. This is useful for the preoperative evaluation and staging of lung cancer.

  18. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  19. Evaluation of a 99Tcm bound brain scanning agent for single photon emission computed tomography

    DEFF Research Database (Denmark)

    Andersen, A R; Hasselbalch, S G; Paulson, O B

    1986-01-01

    D,L HM-PAO-99Tcm (PAO) is a lipophilic tracer complex which is avidly taken up by the brain. We have compared the regional distribution of PAO with regional cerebral blood flow (CBF). CBF was measured by single photon emission computed tomography (SPECT) with a Tomomatic 64 after 133Xe inhalation in 41 patients. With the same SPECT the distribution of PAO was measured after intravenous injection. High resolution (HR) and low resolution (LR) studies were performed, yielding a resolution of 6-10 mm (HR) and 15-20 mm (LR). PAO images showed close resemblance to 133Xe CBF tomograms. Only 20 per cent of the (decay corrected) brain counts were lost during 24 hours.

  20. Single-particle and collective dynamics of methanol confined in carbon nanotubes: a computer simulation study

    International Nuclear Information System (INIS)

    Garberoglio, Giovanni

    2010-01-01

    We present the results of computer simulations of methanol confined in carbon nanotubes. Different levels of confinement were identified as a function of the nanotube radius and characterized using a pair-distribution function adapted to the cylindrical geometry of these systems. Dynamical properties of methanol were also analysed as a function of the nanotube size, both at the level of single-particle and collective properties. We found that confinement in narrow carbon nanotubes strongly affects the dynamical properties of methanol with respect to the bulk phase, due to the strong interaction with the carbon nanotube. In the other cases, confined methanol shows properties quite similar to those of the bulk phase. These phenomena are related to the peculiar hydrogen bonded network of methanol and are compared to the behaviour of water confined in similar conditions. The effect of nanotube flexibility on the dynamical properties of confined methanol is also discussed.

  1. Differentiation of malignant glioma and metastatic brain tumor by thallium-201 single photon emission computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kojima, Yasuhiro; Kuwana, Nobumasa; Noji, Masato; Tosa, Junichi [Yokohama Minami Kyosai Hospital (Japan)

    1994-09-01

    The use of superdelayed thallium-201 single photon emission computed tomography (201Tl SPECT) for differentiating malignant gliomas from cerebral metastases was investigated in 23 patients (7 with meningioma, 6 with glioma, 7 with cerebral metastasis, and 1 each with neurinoma, abscess, and necrosis). 4 mCi of 201Tl was injected intravenously, and gamma camera scans were performed after 10 minutes and 4, 24, 72, and 96 hours (superdelayed scan). The mean thallium index of meningiomas was significantly higher than those of gliomas and cerebral metastases after 10 minutes, while the mean thallium indices of meningiomas and gliomas were significantly higher than those of cerebral metastases after 96 hours. The combination of early and superdelayed 201Tl SPECT may be useful in differentiating malignant gliomas from cerebral metastases. (author).

  2. Hot water epilepsy: Phenotype and single photon emission computed tomography observations

    Directory of Open Access Journals (Sweden)

    Mehul Patel

    2014-01-01

    We studied the anatomical correlates of reflex hot water epilepsy (HWE) using multimodality investigations, viz. magnetic resonance imaging (MRI), electroencephalography (EEG), and single photon emission computed tomography (SPECT). Five men (mean age: 27.0 ± 5.8 years) with HWE were subjected to MRI of the brain, video-EEG studies, and SPECT scans. These were correlated with phenotypic presentations. Seizures could be precipitated in three patients by pouring hot water over the head, and the semiology of the seizures was suggestive of temporal lobe epilepsy. Ictal SPECT showed hyperperfusion in: left medial temporal - one, left lateral temporal - one, and right parietal - one. Interictal SPECT was normal in all five patients and did not help in localization. MRI and interictal EEG were normal in all the patients. The clinical and SPECT studies suggested the temporal lobe as the seizure onset zone in some of the patients with HWE.

  3. Single-photon emission computed tomography in the clinical evaluation of dementia

    International Nuclear Information System (INIS)

    Jagust, W.J.; Reed, B.R.; Budinger, T.F.; Colina, M.

    1987-01-01

    Physiological imaging using positron emission tomography (PET) has been a useful tool in the investigation of dementia. In particular, patterns of cerebral glucose utilization appear to differentiate various types of dementia, with Alzheimer's disease (AD) demonstrating a propensity for hypometabolism to involve the temporoparietal cortex. Single-photon emission computed tomography (SPECT) using new tracers for the measurement of regional cerebral blood flow is a technique with potentially broader clinical availability than PET and thus may provide a practical method of routinely evaluating patients. The authors studied eight patients with AD, four healthy elderly controls, and one patient with multi-infarct dementia (MID) using the tracer 123I-N-isopropyl-p-iodoamphetamine with SPECT.

  4. Brain perfusion single photon emission computed tomography in major psychiatric disorders: From basics to clinical practice

    International Nuclear Information System (INIS)

    Santra, Amburanjan; Kumar, Rakesh

    2014-01-01

    Brain single photon emission computed tomography (SPECT) is a well-established and reliable method to assess brain function through measurement of regional cerebral blood flow (rCBF). It can be used to define a patient's pathophysiological status when neurological or psychiatric symptoms cannot be explained by anatomical neuroimaging findings. Though there is ample evidence validating brain SPECT as a technique to track human behavior and correlating psychiatric disorders with dysfunction of specific brain regions, only few psychiatrists have adopted brain SPECT in routine clinical practice. It can be utilized to evaluate the involvement of brain regions in a particular patient, to individualize treatment on basis of SPECT findings, to monitor the treatment response and modify treatment, if necessary. In this article, we have reviewed the available studies in this regard from existing literature and tried to present the evidence for establishing the clinical role of brain SPECT in major psychiatric illnesses

  5. Simulation Study of Single Photon Emission Computed Tomography for Industrial Applications

    International Nuclear Information System (INIS)

    Roy, Tushar; Sarkar, P. S.; Sinha, Amar

    2008-01-01

    SPECT (Single Photon Emission Computed Tomography) provides an invaluable non-invasive technique for characterizing the activity distribution of gamma-emitting sources. For many medical and industrial applications of radioisotopes, not only the positional information of the distribution of radioisotopes is needed but also its strength. The well-established X-ray radiography or transmission tomography techniques do not yield sufficient quantitative information about these objects. Emission tomography is one of the important methods for such characterization. Applications of parallel beam, fan beam and 3D cone beam emission tomography methods are discussed in this paper. Simulation studies to test these algorithms have been carried out to validate the technique.

  6. Brain perfusion single photon emission computed tomography in children after acute encephalopathy

    International Nuclear Information System (INIS)

    Kurihara, Mana; Nakae, Yoichiro; Kohagizawa, Toshitaka; Eto, Yoshikatsu

    2005-01-01

    We studied single photon emission computed tomography (SPECT) in 15 children more than 1 year after the onset of acute encephalopathy, using technetium-99m-L,L-ethyl cysteinate dimer (99mTc-ECD) and a three-dimensional stereotaxic region-of-interest template. Regional cerebral blood flow was evaluated and divided into three groups according to the severity of disability: absent or mild, moderate, and severe. There was no abnormality on SPECT in the patients without disability or with mild disability. Diffuse hypoperfusion was shown in the groups with moderate and severe disability. The patients with severe disability showed hypoperfusion in the pericallosal, frontal and central areas which was more pronounced than in the patients with moderate disability. (author)

  7. Detection of avascular necrosis in adults by single photon emission computed tomography

    International Nuclear Information System (INIS)

    Collier, B.D.; Johnston, R.P.; Carrera, G.; Isitman, A.T.; Hellman, R.S.; Zielonka, J.S.

    1984-01-01

    Twenty-one adult patients with the clinical diagnosis of avascular necrosis (AVN) of the femoral head were examined with planar bone scintigraphy (high resolution collimator) and single photon emission computed tomography (SPECT). The duration of hip pain ranged from 1 day to 18 months. Risk factors (including steroids, renal transplantation, alcoholism, and trauma) were present in 17 cases. A final diagnosis of AVN (20 hips), osteochondral fracture, or stress fracture was established for 17 patients. The 4 remaining patients, who were radiographically normal and did not complain of pain 3 months later, were thought to have no significant bone pathology. SPECT and planar bone scintigraphy were reported as positive for AVN only if a photopenic bony defect could be identified. In particular, uniformly increased activity throughout the femoral head was not considered to be diagnostic of AVN. The authors conclude that by identifying a photopenic defect which is not evident on planar bone scintigraphy, SPECT can contribute to accurate diagnosis of AVN.

  8. Quantum computing with defects in diamond

    International Nuclear Information System (INIS)

    Jelezko, F.; Gaebel, T.; Popa, I.; Domhan, M.; Wittmann, C.; Wrachtrup, J.

    2005-01-01

    Full text: Single spins in semiconductors, in particular those associated with defect centers, are promising candidates for practical and scalable implementation of quantum computing, even at room temperature. Such an implementation may also use the reliable and well-known gate constructions from bulk nuclear magnetic resonance (NMR) quantum computing. Progress in the development of a quantum processor based on defects in diamond will be discussed. By combining optical microscopy and magnetic resonance techniques, the first quantum logical operations on single spins in a solid have now been demonstrated. The system is promising for room-temperature operation because of the weak dependence of decoherence on temperature (author)

  9. "Defining Computer 'Speed': An Unsolved Challenge"

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Abstract: The reason we use computers is their speed, and the reason we use parallel computers is that they're faster than single-processor computers. Yet, after 70 years of electronic digital computing, we still do not have a solid definition of what computer 'speed' means, or even what it means to be 'faster'. Unlike measures in physics, where the definition of speed is rigorous and unequivocal, in computing there is no definition of speed that is universally accepted. As a result, computer customers have made purchases misguided by dubious information, computer designers have optimized their designs for the wrong goals, and computer programmers have chosen methods that optimize the wrong things. This talk describes why some of the obvious and historical ways of defining 'speed' haven't served us well, and the things we've learned in the struggle to find a definition that works. Biography: Dr. John Gustafson is a Director ...

  10. Optimizing the performance of streaming numerical kernels on the IBM Blue Gene/P PowerPC 450 processor

    KAUST Repository

    Malas, Tareq Majed Yasin; Ahmadia, Aron; Brown, Jed; Gunnels, John A.; Keyes, David E.

    2012-01-01

    Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution

  11. Computational simulation of flow and heat transfer in single-phase natural circulation loops

    International Nuclear Information System (INIS)

    Pinheiro, Larissa Cunha

    2017-01-01

    Passive decay heat removal systems based on natural circulation are essential assets for the new Gen III+ nuclear power reactors and nuclear spent fuel pools. The aim of the present work is to study both laminar and turbulent flow and heat transfer in single-phase natural circulation systems through computational fluid dynamics simulations. The working fluid is considered incompressible with constant properties, and accordingly the Boussinesq natural convection hypothesis was applied. The model chosen for the turbulence closure problem was the k-ε model. The commercial computational fluid dynamics code ANSYS CFX 15.0 was used to obtain the numerical solution of the governing equations. Two single-phase natural circulation circuits were studied, a 2D toroidal loop and a 3D rectangular loop, both with the same boundary conditions: prescribed heat flux at the heater and fixed wall temperature at the cooler. Validation and verification were performed against the numerical data provided by DESRAYAUD et al. [1] and the experimental data provided by MISALE et al. [2] and KUMAR et al. [3]. Excellent agreement between the Reynolds number (Re) and the modified Grashof number (Gr_m) was observed, independently of the Prandtl number (Pr). However, the convergence interval was observed to vary with Pr, indicating that Pr is a stability-governing parameter for natural circulation. Multiple steady states were obtained for Pr = 0.7. Finally, the effect of inclination was studied for the 3D circuit; both in-plane and out-of-plane inclinations were examined for the steady-state laminar regime. The Re for the out-of-plane inclination was in excellent agreement with the correlation found for the zero-inclination system, while for the in-plane inclined system the results differ from those of the corresponding vertical loop. (author)
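
    The Re-Gr_m relationship above can be illustrated with a short sketch. The flux-based definition of Gr_m and the power-law form Re = C(Gr_m/N_G)^b below are assumptions borrowed from the natural-circulation literature (Vijayan-type correlations), not the specific forms used in this work:

    ```python
    # Illustrative Vijayan-type steady-state correlation for a single-phase
    # natural circulation loop; definitions and constants are assumed for
    # illustration, not taken from the thesis summarized above.
    def modified_grashof(D, rho, beta, g, Q_h, H, A, mu, cp):
        """One common flux-based modified Grashof number Gr_m."""
        return (D**3 * rho**2 * beta * g * Q_h * H) / (A * mu**3 * cp)

    def steady_state_reynolds(Gr_m, N_G, C=0.1768, b=0.5):
        """Power-law correlation Re = C * (Gr_m / N_G)**b (laminar regime)."""
        return C * (Gr_m / N_G) ** b

    # Water-like properties and loop dimensions chosen only for illustration
    Gr_m = modified_grashof(D=0.026, rho=998.0, beta=2.1e-4, g=9.81,
                            Q_h=500.0, H=1.0, A=5.3e-4, mu=1.0e-3, cp=4180.0)
    print(f"Gr_m = {Gr_m:.3e}, Re = {steady_state_reynolds(Gr_m, N_G=250):.0f}")
    ```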

  12. Computational Analysis of Single Nucleotide Polymorphisms Associated with Altered Drug Responsiveness in Type 2 Diabetes

    Directory of Open Access Journals (Sweden)

    Valerio Costa

    2016-06-01

    Full Text Available Type 2 diabetes (T2D) is one of the most frequent causes of mortality in western countries, with rapidly increasing prevalence. Anti-diabetic drugs are the first therapeutic approach, although many patients develop drug resistance. Most drug responsiveness variability can be explained by genetic causes. Inter-individual variability is principally due to single nucleotide polymorphisms, and differential drug responsiveness has been correlated with alterations in genes involved in drug metabolism (CYP2C9) or insulin signaling (IRS1, ABCC8, KCNJ11 and PPARG). However, most genome-wide association studies did not provide clues about the contribution of DNA variations to impaired drug responsiveness. Thus, characterizing T2D drug responsiveness variants is needed to guide clinicians toward tailored therapeutic approaches. Here, we extensively investigated polymorphisms associated with altered drug response in T2D, predicting their effects in silico. Combining different computational approaches, we focused on the expression pattern of genes correlated with drug resistance, inferred the evolutionary conservation of polymorphic residues, and computationally predicted the biochemical properties of the polymorphic proteins. Using RNA-Sequencing followed by targeted validation, we identified and experimentally confirmed that two nucleotide variations in the CAPN10 gene (currently annotated as intronic) fall within two new transcripts in this locus. Additionally, we found that a Single Nucleotide Polymorphism (SNP), currently reported as intergenic, maps to the intron of a new transcript harboring the CAPN10 and GPR35 genes, which undergoes nonsense-mediated decay. Finally, we analyzed variants that fall into non-coding regulatory regions of yet underestimated functional significance, predicting that some of them can potentially affect gene expression and/or post-transcriptional regulation of mRNAs by affecting splicing.

  13. Sacrococcygeal chordoma: increased 99mTc methylene diphosphonate uptake on single photon emission computed tomography/computed tomography bone scintigraphy

    International Nuclear Information System (INIS)

    Kamaleshwaran, Koramadai Karuppuswamy; Bhattacharya, Anish; Harisankar, Chidambaram Natarajan Balasubramaniam; Mittal, Bhagwant Rai; Goni, Vijay

    2012-01-01

    Chordoma is a malignant tumor arising from the remnants of the notochord, and is the most frequent primary tumor of the sacrum. While most sacral tumors show increased concentration of bone-seeking radiopharmaceuticals, chordomas usually exhibit decreased uptake. The authors present an image of a sacrococcygeal chordoma with osteolysis and increased uptake of 99mTc methylene diphosphonate on planar and single photon emission computed tomography/computed tomography bone scintigraphy. (author)

  14. Study and Implementation of a Real-Time Operating System on an ARM-Based Single Board Computer

    Directory of Open Access Journals (Sweden)

    Wiedjaja A

    2014-06-01

    Full Text Available An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and automotive braking systems (auto-braking systems), which need a high level of reliability, require an operating system that operates in real time. This study assesses the implementation of Linux-based operating systems on an ARM-based Single Board Computer (SBC), namely the PandaBoard ES with a dual-core ARM Cortex-A9 (TI OMAP 4460). The research was conducted by implementing the general-purpose OS Ubuntu 12.04 OMAP4-armhf and the RTOS Linux 3.4.0-rt17+ on the PandaBoard ES, and then comparing the latency of each OS under no-load and full-load conditions. The results show that the maximum latency of the RTOS under full load is 45 µs, much smaller than the maximum latency of the GPOS under full load, 17,712 µs. The lower latency demonstrates that the RTOS is far better able to run processes within a given period of time than the GPOS.
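
    A latency comparison of this kind can be approximated in user space with a short script. The sketch below illustrates only the measurement idea, not the methodology used in the study: it repeatedly requests a fixed sleep interval and records how far each wake-up overshoots it.

    ```python
    import time

    def measure_wakeup_latency(period_ns=1_000_000, samples=1000):
        """Request `period_ns` sleeps and record the wake-up overshoot.

        The overshoot (actual - requested sleep time) is a rough user-space
        proxy for scheduling latency; an RTOS keeps its worst case bounded.
        """
        overshoots = []
        for _ in range(samples):
            start = time.monotonic_ns()
            time.sleep(period_ns / 1e9)
            overshoots.append(time.monotonic_ns() - start - period_ns)
        return max(overshoots) / 1000  # worst case, in microseconds

    print(f"max wake-up overshoot: {measure_wakeup_latency():.1f} us")
    ```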

  15. Diversity of bilateral synaptic assemblies for binaural computation in midbrain single neurons.

    Science.gov (United States)

    He, Na; Kong, Lingzhi; Lin, Tao; Wang, Shaohui; Liu, Xiuping; Qi, Jiyao; Yan, Jun

    2017-11-01

    Binaural hearing confers many beneficial functions but our understanding of its underlying neural substrates is limited. This study examines the bilateral synaptic assemblies and binaural computation (or integration) in the central nucleus of the inferior colliculus (ICc) of the auditory midbrain, a key convergent center. Using in-vivo whole-cell patch-clamp, the excitatory and inhibitory postsynaptic potentials (EPSPs/IPSPs) of single ICc neurons to contralateral, ipsilateral and bilateral stimulation were recorded. According to the contralateral and ipsilateral EPSP/IPSP, 7 types of bilateral synaptic assemblies were identified. These include EPSP-EPSP (EE), E-IPSP (EI), E-no response (EO), II, IE, IO and complex-mode (CM) neurons. The CM neurons showed frequency- and/or amplitude-dependent EPSPs/IPSPs to contralateral or ipsilateral stimulation. Bilateral stimulation induced EPSPs/IPSPs that could be larger than (facilitation), similar to (ineffectiveness) or smaller than (suppression) those induced by contralateral stimulation. Our findings have allowed our group to characterize novel neural circuitry for binaural computation in the midbrain. Copyright © 2017 Elsevier B.V. All rights reserved.
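
    Because the seven assembly types follow mechanically from which response each ear evokes, the labelling can be expressed as a small function. This is a toy reconstruction from the abstract; in the study, CM neurons are defined by frequency- and/or amplitude-dependent responses, which this sketch reduces to a single 'mixed' flag.

    ```python
    def classify_assembly(contra, ipsi, mixed=False):
        """Label a bilateral synaptic assembly from its ear-specific responses.

        contra, ipsi: 'E' (EPSP), 'I' (IPSP) or 'O' (no response).
        mixed: True when responses vary with frequency/amplitude (complex mode).
        Toy reconstruction of the scheme described in the abstract.
        """
        if mixed:
            return "CM"
        label = contra + ipsi
        if label not in {"EE", "EI", "EO", "II", "IE", "IO"}:
            raise ValueError(f"unrecognized combination: {label}")
        return label

    print(classify_assembly("E", "I"))   # -> EI
    print(classify_assembly("E", "O"))   # -> EO
    ```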

  16. Scintigraphic appearance of focal fatty infiltration of the liver using single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Kudo, M.; Hirasa, M.; Ibuki, Y.

    1984-01-01

    Fatty infiltration of the liver had been considered to assume a uniform distribution until quite recently. However, the development of X-ray computed tomography (XCT) and ultrasound (US) has proven that fatty infiltration of the liver may sometimes assume a nonuniform distribution (focal fatty infiltration, FFI). This investigation was undertaken to evaluate the scintigraphic appearance of FFI using single-photon emission computed tomography (SPECT) with a GE Maxicamera 400T. Radionuclide images including SPECT were evaluated in 12 cases with FFI diagnosed by XCT and US. Most of them were histologically confirmed as fatty infiltration of the liver. The results were as follows. The fatty infiltrated area was visualized as a hot spot in one case, a defect in 2 cases, a low uptake in one case and a normal uptake in 8 cases. Radionuclide imaging of FFI shows a large variety of findings, suggesting that Kupffer cell function varies with the cause or stage of fatty infiltration. Only radionuclide imaging, including SPECT, can assess the pathological state of FFI from the viewpoint of Kupffer cell function, and it is very useful to compare these images with XCT images

  17. The identification of spinal pathology in chronic low back pain using single photon emission computed tomography

    International Nuclear Information System (INIS)

    Ryan, R.J.; Gibson, T.; Fogelman, I.

    1992-01-01

    Single photon emission computed tomography (SPECT) findings were investigated in 80 consecutive patients (aged 18-70 years, median 44) referred to a rheumatology outpatient clinic with low back pain persisting for more than 3 months. Lesions of the lumbar spine were demonstrated in 60% of patients using SPECT but in only 35% with planar imaging. Fifty-one per cent of all lesions were detected only by SPECT, and lesions visualized on SPECT could be precisely localized to the vertebral body or to different parts of the posterior elements. Fifty per cent of lesions involved the facetal joints, of which almost 60% were identified on SPECT alone. X-rays of the lumbar spine, with posterior oblique views, failed to demonstrate abnormalities corresponding to almost all SPECT posterior element lesions, although they identified abnormalities corresponding to over 60% of anterior SPECT lesions. Computed tomography (CT) was performed in 30 patients with a SPECT lesion, and sites of facetal joint activity corresponded to facetal osteoarthritis in 82%. (author)

  18. Statistical noise with the weighted backprojection method for single photon emission computed tomography

    International Nuclear Information System (INIS)

    Murayama, Hideo; Tanaka, Eiichi; Toyama, Hinako.

    1985-01-01

    The weighted backprojection (WBP) method and the radial post-correction (RPC) method were compared with several other attenuation correction methods for single photon emission computed tomography by computer simulation. These methods are the pre-correction method with arithmetic means of opposing projections, the post-correction method with a correction matrix, and the inverse attenuated Radon transform method. Statistical mean square noise in a reconstructed image was formulated and displayed two-dimensionally for typical simulated phantoms. The noise image for the WBP method depended on several parameters, namely the size of the attenuating object, the distribution of activity, the attenuation coefficient, the choice of the reconstruction index k, and the position of the reconstruction origin. The noise image for the WBP method with k=0 was almost the same as that for the RPC method. It was shown that the position of the reconstruction origin has to be chosen appropriately in order to improve the noise properties of the reconstructed image for the WBP method as well as the RPC method. A comparison of the different attenuation correction methods, using both the reconstructed images and the statistical noise images with the same mathematical phantom and convolving function, concluded that the WBP method and the RPC method were more amenable to arbitrary radioisotope distributions than the other methods, and had the advantage of flexibility to improve image noise at any local position. (author)

  19. Single photon emission computed tomography using a regularizing iterative method for attenuation correction

    International Nuclear Information System (INIS)

    Soussaline, Francoise; Cao, A.; Lecoq, G.

    1981-06-01

    An analytically exact solution to the attenuated tomographic operator is proposed. The technique, called the Regularizing Iterative Method (RIM), belongs to the iterative class of procedures in which a priori knowledge can be introduced into the evaluation of the size and shape of the activity domain to be reconstructed, and of the exact attenuation distribution. The relaxation factor used leads to fast convergence and provides noise filtering for a small number of iterations. The effectiveness of the method was tested on the Single Photon Emission Computed Tomography (SPECT) reconstruction problem, with the goal of precise correction for attenuation before quantitative study. Its implementation involves the use of a rotating scintillation-camera-based SPECT detector connected to a minicomputer system. Mathematical simulations of cylindrical uniformly attenuated phantoms indicate that, within the range of the a priori calculated relaxation factor, a fast-converging solution can always be found with a (contrast) accuracy of the order of 0.2 to 4%, depending on whether numerical errors and noise are taken into account. The sensitivity of the RIM algorithm to errors in the size of the reconstructed object and in the value of the attenuation coefficient μ was studied using the same simulation data. Extreme variations of ±15% in these parameters lead to errors of the order of ±20% in the quantitative results. Physical phantoms representing a variety of geometrical situations were also studied
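
    The abstract does not give the RIM update rule. As a generic illustration of how a relaxation factor enters an iterative reconstruction of y = Ax, a Landweber-type iteration (not the authors' method) can be sketched as follows:

    ```python
    import numpy as np

    def relaxed_iteration(A, y, relaxation, iters, x0=None):
        """Generic relaxed iterative solver for y = A x.

        Implements x_{k+1} = x_k + relaxation * A^T (y - A x_k), showing how
        a relaxation factor trades convergence speed against noise
        amplification; this is not the specific RIM update.
        """
        x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
        for _ in range(iters):
            x += relaxation * A.T @ (y - A @ x)
        return x

    # Tiny example: two projections of a two-pixel "image"
    A = np.array([[1.0, 0.5], [0.3, 1.0]])
    x_true = np.array([2.0, 1.0])
    print(np.round(relaxed_iteration(A, A @ x_true, 0.5, 200), 3))  # ~[2. 1.]
    ```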

  20. Erica the Rhino: A Case Study in Using Raspberry Pi Single Board Computers for Interactive Art

    Directory of Open Access Journals (Sweden)

    Philip J. Basford

    2016-06-01

    Full Text Available Erica the Rhino is an interactive art exhibit created by the University of Southampton, UK. Erica was created as part of a city-wide art trail in 2013 called “Go! Rhinos”, curated by Marwell Wildlife, to raise awareness of rhino conservation. Erica arrived as a white fibreglass shell which was then painted and equipped with five Raspberry Pi Single Board Computers (SBCs). These computers allowed the audience to interact with Erica through a range of sensors and actuators. In particular, the audience could feed and stroke her to prompt reactions, as well as send her Tweets to change her behaviour. Pi SBCs were chosen because of their ready availability and their educational pedigree. During the deployment, ‘coding clubs’ were run in the shopping centre where Erica was located, and these allowed children to experiment with and program the same components used in Erica. The experience gained through numerous deployments around the country has enabled Erica to be upgraded to increase reliability and ease of maintenance, whilst the release of the Pi 2 has allowed her responsiveness to be improved.

  1. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

    Science.gov (United States)

    Shulaker, Max M.; Hills, Gage; Park, Rebecca S.; Howe, Roger T.; Saraswat, Krishna; Wong, H.-S. Philip; Mitra, Subhasish

    2017-07-01

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  2. Computer-guided facility for the study of single crystals at the gamma diffractometer GADI

    International Nuclear Information System (INIS)

    Heer, H.; Bleichert, H.; Gruhn, W.; Moeller, R.

    1984-10-01

    In the study of solid-state properties it is in many cases necessary to work with single crystals. Increased demand in industry and research, as well as the desire for better characterization by means of γ-diffractometry, made it necessary to improve and modernize the existing instrument. The advantages of a computer-guided facility over conventional semiautomatic operation are manifold. Not only the process control but also the data acquisition and evaluation are performed by the computer. Using a remote control, the operator is able to quickly find a reflection and drive the crystal to any desired measuring position. The complete logging of all important measurement parameters, the convenient data storage, and the automatic evaluation are of great use to the user. Finally, the measuring time can be extended to practically 24 hours per day. This puts characterization by means of γ-diffractometry on a completely new level. (orig.) [de

  3. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs, which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.
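
    A minimal benchmarking harness of the kind used for such comparisons simply times a fixed, CPU-bound workload on each board. The sketch below is illustrative and does not reproduce the paper's actual benchmark programs:

    ```python
    import time

    def cpu_workload(n=200_000):
        """Fixed CPU-bound task: naive prime counting (illustrative only)."""
        return sum(1 for k in range(2, n)
                   if all(k % d for d in range(2, int(k ** 0.5) + 1)))

    def benchmark(repeats=5):
        """Best-of-N wall-clock time for the workload, in seconds."""
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            cpu_workload()
            times.append(time.perf_counter() - t0)
        return min(times)

    # Run the same script on each SBC and compare the reported times.
    print(f"best time: {benchmark():.2f} s")
    ```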

  4. PERFORMANCE EVALUATION OF OR1200 PROCESSOR WITH EVOLUTIONARY PARALLEL HPRC USING GEP

    Directory of Open Access Journals (Sweden)

    R. Maheswari

    2012-04-01

    Full Text Available In this fast computing era, most embedded systems require more computing power to complete complex functions/tasks in less time. One way to achieve this is by boosting processor performance, which allows the processor core to run faster. This paper presents a novel technique for increasing performance by parallel HPRC (High Performance Reconfigurable Computing) in the CPU/DSP (Digital Signal Processor) unit of the OR1200 (Open RISC (Reduced Instruction Set Computer) 1200) using Gene Expression Programming (GEP), an evolutionary programming model. OR1200 is a soft-core RISC processor of the Intellectual Property cores that can efficiently run any modern operating system. In the manufacturing process of the OR1200, a parallel HPRC is placed internally in the Integer Execution Pipeline unit of the CPU/DSP core to increase performance. The GEP parallel HPRC is activated/deactivated by triggering the signals (i) HPRC_Gene_Start and (ii) HPRC_Gene_End. Verilog HDL (Hardware Description Language) functional code for the Gene Expression Programming parallel HPRC is developed and synthesised using XILINX ISE in the former part of the work, and the CoreMark processor core benchmark is used to test the performance of the OR1200 soft core in the latter part of the work. The result of the implementation shows an overall speed-up of 20.59% from the GEP-based parallel HPRC in the execution unit of the OR1200.

  5. Quantitation of postexercise lung thallium-201 uptake during single photon emission computed tomography

    International Nuclear Information System (INIS)

    Kahn, J.K.; Carry, M.M.; McGhie, I.; Pippin, J.J.; Akers, M.S.; Corbett, J.R.

    1989-01-01

    To test the hypothesis that analysis of lung thallium uptake measured during single photon emission computed tomography (SPECT) yields supplementary clinical information as reported for planar imaging, quantitative analysis of lung thallium uptake following maximal exercise was performed in 40 clinically normal subjects (Group 1) and 15 angiographically normal subjects (Group 2). Lung thallium uptake was measured from anterior projection images using a ratio of heart-to-lung activities. Seventy subjects with coronary artery disease (CAD) (Group 3) determined by angiography (greater than or equal to 70% luminal stenosis) underwent thallium perfusion SPECT. Thirty-nine percent of these subjects had multivessel and 61% had single vessel CAD. Lung thallium uptake was elevated in 47 of 70 (67%) Group 3 subjects. Group 3 subjects with elevated lung thallium uptake did not differ from Group 3 subjects with normal lung thallium uptake with respect to extent or distribution of coronary artery disease, left ventricular function, or severity of myocardial ischemia as determined by exercise and redistribution thallium SPECT. Thus, the measurement of thallium lung uptake from anterior projection images obtained during SPECT frequently identifies patients with CAD, but it may not provide supplementary information regarding the extent of myocardial ischemia or ventricular dysfunction
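
    As a sketch of the quantification step, a heart-to-lung activity ratio can be computed from mean counts in two regions of interest. The ROI geometry and the absence of background correction here are illustrative assumptions; the abstract does not specify these details.

    ```python
    import numpy as np

    def heart_to_lung_ratio(image, heart_roi, lung_roi):
        """Mean-count ratio between heart and lung ROIs on an anterior
        projection image; lower ratios correspond to elevated lung uptake.

        image: 2D array of counts; the ROIs are boolean masks of equal shape.
        """
        return image[heart_roi].mean() / image[lung_roi].mean()

    # Toy example with a synthetic 64x64 anterior projection
    img = np.random.poisson(lam=50, size=(64, 64)).astype(float)
    heart = np.zeros_like(img, dtype=bool); heart[20:30, 25:35] = True
    lung = np.zeros_like(img, dtype=bool); lung[10:18, 40:50] = True
    print(f"heart/lung ratio: {heart_to_lung_ratio(img, heart, lung):.2f}")
    ```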

  6. Diagnostic limitations of 10 mm thickness single-slice computed tomography for patients with suspected appendicitis

    International Nuclear Information System (INIS)

    Kaidu, Motoki; Oyamatu, Manabu; Sato, Kenji; Saitou, Akira; Yamamoto, Satoshi; Yoshimura, Norihiko; Sasai, Keisuke

    2008-01-01

    The aim of this retrospective analysis was to evaluate the accuracy of 10 mm thickness single helical computed tomography (CT) examinations for confirming the diagnosis of appendicitis or providing a diagnosis other than appendicitis, including underlying periappendiceal neoplasms. From April 1, 2001 to March 30, 2005, a total of 272 patients with suspected appendicitis underwent CT examinations. Of the 272 patients, 106 (39%) underwent surgery. Seven CT examinations for seven patients were excluded because of inconsistency with the CT protocol. We therefore reviewed 99 CT images (99 patients) in correlation with surgical-pathological findings to clarify the diagnostic accuracy of the CT examinations. We compared the postoperative diagnosis with the preoperative CT report. The final diagnoses were confirmed by macroscopic findings at surgery and by pathological evaluation if necessary. Of the 99 patients, 87 had acute appendicitis at surgery. The sensitivity, specificity, and accuracy of CT were 98.9%, 75.0%, and 96.0%, respectively. The positive predictive value and negative predictive value were 96.6% and 90.0%, respectively. Among nine patients in the true-negative category, five had colon cancers; and among three patients in the false-positive category, two had cancer of the cecal-appendiceal region as the underlying disease. CT examination is useful for patients with suspected appendicitis, but radiologists should be aware of the limitations of thick-slice single helical CT. They should also be aware of the possibility of other diseases, including coincident abdominal neoplasms and underlying cecal-appendiceal cancer. (author)
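
    The reported figures are internally consistent, as a quick check shows; the counts below (86 true positives, 1 false negative, 9 true negatives, 3 false positives) are inferred from the abstract's 99 patients and 87 surgically confirmed appendicitis cases.

    ```python
    def diagnostic_metrics(tp, fn, tn, fp):
        """Standard diagnostic-accuracy measures from a 2x2 confusion matrix."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + fn + tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Counts inferred from the abstract (99 patients, 87 with appendicitis)
    for name, value in diagnostic_metrics(tp=86, fn=1, tn=9, fp=3).items():
        print(f"{name}: {value:.1%}")
    # sensitivity 98.9%, specificity 75.0%, accuracy 96.0%,
    # ppv 96.6%, npv 90.0% -- matching the reported values
    ```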

  7. Photonics and Fiber Optics Processor Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Photonics and Fiber Optics Processor Lab develops, tests and evaluates high speed fiber optic network components as well as network protocols. In addition, this...

  8. Broadband set-top box using MAP-CA processor

    Science.gov (United States)

    Bush, John E.; Lee, Woobin; Basoglu, Chris

    2001-12-01

    Advances in broadband access are expected to exert a profound impact on our everyday life. Broadband will be the key to the digital convergence of communication, computer and consumer equipment. A common thread that facilitates this convergence comprises digital media and the Internet. To address this market, Equator Technologies, Inc., is developing the Dolphin broadband set-top box reference platform using its MAP-CA Broadband Signal Processor™ chip. The Dolphin reference platform is a universal media platform for the display and presentation of digital content on end-user entertainment systems. The objective of the Dolphin reference platform is to provide a complete set-top box system based on the MAP-CA processor. It includes all the necessary hardware and software components for the emerging broadcast and broadband digital media market based on IP protocols. Such a reference design requires broadband Internet access and high-performance digital signal processing. By using the MAP-CA processor, the Dolphin reference platform is completely programmable, allowing various codecs to be implemented in software, such as MPEG-2, MPEG-4, H.263 and proprietary codecs. The software implementation also enables field upgrades to keep pace with evolving technology and industry demands.

  9. A fast inner product processor based on equal alignments

    Energy Technology Data Exchange (ETDEWEB)

    Smith, S.P.; Torng, H.C.

    1985-11-01

    Inner product computation is an important operation, invoked repeatedly in matrix multiplications. A high-speed inner product processor can be very useful (among many possible applications) in real-time signal processing. This paper presents the design of a fast inner product processor, with appreciably reduced latency and cost. The inner product processor is implemented with a tree of carry-propagate or carry-save adders; this structure is obtained by incorporating three innovations in the conventional multiply/add tree: The leaf multipliers are expanded into adder subtrees, thus achieving an O(log Nb) latency, where N denotes the number of elements in a vector and b the number of bits in each element. The partial products, to be summed in producing an inner product, are reordered according to their "minimum alignments." This reordering brings approximately a 20% savings in hardware, including adders and data paths. The reduction in adder widths also yields savings in carry propagation time for carry-propagate adders. For trees implemented with carry-save adders, the partial product reordering also serves to truncate the carry propagation chain in the final propagation stage by 2 log b - 1 positions, thus significantly reducing the latency further. A form of the Baugh and Wooley algorithm is adopted to implement two's complement notation with changes only in peripheral hardware.
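
    The latency claim has a simple software analogue: summing the N*b partial products with a balanced binary tree takes ceil(log2(N*b)) addition levels rather than a linear chain of N*b - 1 additions. The sketch below models only that reduction idea, not the paper's adder hardware.

    ```python
    def tree_sum(values):
        """Sum a list with a balanced binary reduction tree.

        The loop runs ceil(log2(len(values))) times -- the software analogue
        of the adder-tree depth (latency) discussed above.
        """
        while len(values) > 1:
            paired = [values[i] + values[i + 1]
                      for i in range(0, len(values) - 1, 2)]
            if len(values) % 2:       # odd element passes to the next level
                paired.append(values[-1])
            values = paired
        return values[0]

    def inner_product(a, b):
        """Inner product via tree reduction of the elementwise products."""
        return tree_sum([x * y for x, y in zip(a, b)])

    print(inner_product([1, 2, 3, 4], [5, 6, 7, 8]))  # 70
    ```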

  10. A single exercise bout and locomotor learning after stroke: physiological, behavioural, and computational outcomes.

    Science.gov (United States)

    Charalambous, Charalambos C; Alcantara, Carolina C; French, Margaret A; Li, Xin; Matt, Kathleen S; Kim, Hyosub E; Morton, Susanne M; Reisman, Darcy S

    2018-05-15

    Previous work demonstrated an effect of a single high-intensity exercise bout coupled with motor practice on the retention of a newly acquired skilled arm movement, in both neurologically intact and impaired adults. In the present study, using behavioural and computational analyses we demonstrated that a single exercise bout, regardless of its intensity and timing, did not increase the retention of a novel locomotor task after stroke. Considering both present and previous work, we postulate that the benefits of exercise effect may depend on the type of motor learning (e.g. skill learning, sensorimotor adaptation) and/or task (e.g. arm accuracy-tracking task, walking). Acute high-intensity exercise coupled with motor practice improves the retention of motor learning in neurologically intact adults. However, whether exercise could improve the retention of locomotor learning after stroke is still unknown. Here, we investigated the effect of exercise intensity and timing on the retention of a novel locomotor learning task (i.e. split-belt treadmill walking) after stroke. Thirty-seven people post stroke participated in two sessions, 24 h apart, and were allocated to active control (CON), treadmill walking (TMW), or total body exercise on a cycle ergometer (TBE). In session 1, all groups exercised for a short bout (∼5 min) at low (CON) or high (TMW and TBE) intensity and before (CON and TMW) or after (TBE) the locomotor learning task. In both sessions, the locomotor learning task was to walk on a split-belt treadmill in a 2:1 speed ratio (100% and 50% fast-comfortable walking speed) for 15 min. To test the effect of exercise on 24 h retention, we applied behavioural and computational analyses. Behavioural data showed that neither high-intensity group showed greater 24 h retention compared to CON, and computational data showed that 24 h retention was attributable to a slow learning process for sensorimotor adaptation. Our findings demonstrated that acute exercise

  11. Further computer appreciation

    CERN Document Server

    Fry, T F

    2014-01-01

    Further Computer Appreciation is a comprehensive coverage of the principles and aspects of computer appreciation. The book starts by describing the development of computers from the first to the third computer generations, through the development of processors and storage systems, up to the present position of computers and future trends. The text tackles the basic elements, concepts and functions of digital computers, computer arithmetic, input media and devices, and computer output. The basic central processor functions, data storage and the organization of data by classification of computer files,

  12. Point and track-finding processors for multiwire chambers

    CERN Document Server

    Hansroul, M

    1973-01-01

    The hardware processors described below are designed to be used in conjunction with multi-wire chambers. They have the characteristic of being based on computational methods, in contrast to analogue procedures. In a sense, they are hardware implementations of computer programs. But, being specially designed for their purpose, they are free of the restrictions imposed by the architecture of the computer on which the equivalent program is to run. The parallelism inherent in the algorithms can thus be fully exploited. Combined with the use of fast-access scratch-pad memories and the non-sequential nature of the control program, the parallelism accounts for the fact that these processors are expected to execute 2-3 orders of magnitude faster than the equivalent Fortran programs on a CDC 7600 or 6600. As a consequence, methods which are simple and straightforward, but which are impractical because they require an exorbitant amount of computer time, can on the contrary be very attractive for hardware implementation. ...

  13. Keystone Business Models for Network Security Processors

    OpenAIRE

    Arthur Low; Steven Muegge

    2013-01-01

    Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor...

  14. Real time monitoring of electron processors

    International Nuclear Information System (INIS)

    Nablo, S.V.; Kneeland, D.R.; McLaughlin, W.L.

    1995-01-01

    A real time radiation monitor (RTRM) has been developed for monitoring the dose rate (current density) of electron beam processors. The system provides continuous monitoring of processor output, electron beam uniformity, and an independent measure of operating voltage or electron energy. In view of the device's ability to replace labor-intensive dosimetry in verification of machine performance on a real-time basis, its application to providing archival performance data for in-line processing is discussed. (author)

  15. Advanced topics in security computer system design

    International Nuclear Information System (INIS)

    Stachniak, D.E.; Lamb, W.R.

    1989-01-01

    The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features

  16. A Geometric Algebra Co-Processor for Color Edge Detection

    Directory of Open Access Journals (Sweden)

    Biswajit Mishra

    2015-01-01

    Full Text Available This paper describes an advancement in color edge detection, using a dedicated Geometric Algebra (GA) co-processor implemented on an Application Specific Integrated Circuit (ASIC). GA provides a rich set of geometric operations, giving the advantage that many signal and image processing operations become straightforward and the algorithms intuitive to design. The use of GA allows images to be represented with the three R, G, B color channels defined as a single entity, rather than as separate quantities. A novel custom ASIC is proposed and fabricated that directly targets GA operations and results in significant performance improvement for color edge detection. Use of the hardware described in this paper also shows that the convolution operation with the rotor masks within GA belongs to a class of linear vector filters and can be applied to image or speech signals. The contribution of the proposed approach has been demonstrated by implementing three different types of edge detection schemes on the proposed hardware. The overall performance gains using the proposed GA co-processor over existing software approaches are more than 3.2× faster than GAIGEN and more than 2800× faster than GABLE. The performance of the fabricated GA co-processor is approximately an order of magnitude faster than previously published results for hardware implementations.

  17. The hardware track finder processor in CMS at CERN

    International Nuclear Information System (INIS)

    Kluge, A.

    1997-07-01

    The work covers the design of the Track Finder Processor for the high energy experiment CMS at CERN, Geneva. The task of this processor is to identify muons and to measure their transverse momentum. The Track Finder makes it possible to determine the physical relevance of each high-energy collision and to forward only interesting data to the data analysis units. Data from more than two hundred thousand detector cells are used to determine the location of muons and to measure their transverse momentum. Every 25 ns a new data set is generated. Measurement of the location and transverse momentum of the muons can be completed within 350 ns by using an ASIC. The classical method in high energy physics experiments is to employ a pattern comparison method, in which predefined patterns are compared to the found patterns. The high number of data channels and the complex requirements on the spatial detector resolution do not permit the use of a pattern comparison method here. Instead, a so-called track-following algorithm was designed, which is able to assemble complete tracks through the whole detector starting from single track segments. Instead of storing a high number of track patterns, the problem is brought back to the algorithm level. Comprehensive simulations, employing the hardware simulation language VHDL, were conducted in order to optimize the algorithm and its hardware implementation. An FPGA (field-programmable gate array) prototype was designed, and a feasibility study on implementing the track finder processor with ASICs was conducted. (author)

  18. Computational atomic and nuclear physics

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.; McGrory, J.B.

    1990-01-01

    The evolution of parallel processor supercomputers in recent years provides opportunities to investigate in detail many complex problems, in many branches of physics, which were considered to be intractable only a few years ago. But to take advantage of these new machines, one must have a better understanding of how the computers organize their work than was necessary with previous single processor machines. Equally important, the scientist must have this understanding as well as a good understanding of the structure of the physics problem under study. In brief, a new field of computational physics is evolving, which will be led by investigators who are highly literate both computationally and physically. A Center for Computationally Intensive Problems has been established with the collaboration of the University of Tennessee Science Alliance, Vanderbilt University, and the Oak Ridge National Laboratory. The objective of this Center is to carry out forefront research in computationally intensive areas of atomic, nuclear, particle, and condensed matter physics. An important part of this effort is the appropriate training of students. An early effort of this Center was to conduct a Summer School of Computational Atomic and Nuclear Physics. A distinguished faculty of scientists in atomic, nuclear, and particle physics gave lectures on the status of present understanding of a number of topics at the leading edge in these fields, and emphasized those areas where computational physics was in a position to make a major contribution. In addition, there were lectures on numerical techniques which are particularly appropriate for implementation on parallel processor computers and which are of wide applicability in many branches of science

  19. MP CBM-Z V1.0: design for a new CBM-Z gas-phase chemical mechanism architecture for next generation processors

    OpenAIRE

    Wang, Hui; Lin, Junmin; Wu, Qizhong; Chen, Huansheng; Tang, Xiao; Wang, Zifa; Chen, Xueshun; Cheng, Huaqiong; Wang, Lanning

    2018-01-01

    Precise and rapid air quality simulation and forecasting are limited by the computational performance of the air quality model, and the gas-phase chemistry module is the most time-consuming function in the air quality model. In this study, we designed a new framework for the widely used Carbon Bond Mechanism Z (CBM-Z) gas-phase chemical kinetics kernel to exploit the Single Instruction Multiple Data (SIMD) technology of next-generation processors and improve its calculation performance. The...
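
    The SIMD adaptation described here amounts to evaluating the same kinetic expressions across many grid cells per instruction instead of one cell at a time. The sketch below shows the data-layout idea in vectorized form; it is illustrative only (CBM-Z itself is a Fortran kernel, and its rate expressions are far more involved), and the rate parameters are placeholders.

    ```python
    import numpy as np

    def arrhenius(T, A, E_over_R):
        """Arrhenius rate constant k = A * exp(-(E/R) / T), evaluated over
        all grid cells at once -- the SIMD-friendly data layout."""
        return A * np.exp(-E_over_R / T)

    # One temperature value per grid cell (100k cells, illustrative)
    T_cells = 298.15 + np.random.randn(100_000)
    k = arrhenius(T_cells, A=2.2e-12, E_over_R=-120.0)  # one vector sweep
    conc = np.random.rand(100_000)
    loss_rate = k * conc   # elementwise chemistry update across the grid
    print(loss_rate[:3])
    ```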

  20. Accuracy Limitations in Optical Linear Algebra Processors

    Science.gov (United States)

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.