WorldWideScience

Sample records for single processor computers

  1. Multiple Embedded Processors for Fault-Tolerant Computing

    Science.gov (United States)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  2. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    These proceedings contain the articles presented at the named conference. They concern hardware and software for vector and parallel processors, numerical methods and algorithms for computation on such processors, as well as applications of such methods to different fields of physics and related sciences. See hints under the relevant topics. (HSI)

  3. Optimal processor assignment for pipeline computations

    Science.gov (United States)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; here it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
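
    For the special case of a linear (chain) precedence graph, the response-time problem admits a textbook dynamic program consistent with the stated O(np²) bound. A sketch, with $f_i(k)$ denoting the measured response time of task $i$ on $k$ processors (notation assumed here, not taken from the paper):

    $$T_i(q) = \min_{1 \le k \le q} \bigl[ T_{i-1}(q-k) + f_i(k) \bigr], \qquad T_0(q) = 0,$$

    where $T_n(p)$ is the optimal pipeline response time on $p$ processors. There are $np$ states, each minimized over at most $p$ choices of $k$, giving the $O(np^2)$ running time quoted above.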

  4. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues.

  5. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  6. Multiple core computer processor with globally-accessible local memories

    Science.gov (United States)

    Shalf, John; Donofrio, David; Oliker, Leonid

    2016-09-20

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.

  7. Fast parallel computation of polynomials using few processors

    DEFF Research Database (Denmark)

    Valiant, Leslie; Skyum, Sven

    1981-01-01

    It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^O(1) processors.

  8. Fast Parallel Computation of Polynomials Using Few Processors

    DEFF Research Database (Denmark)

    Valiant, Leslie G.; Skyum, Sven; Berkowitz, S.

    1983-01-01

    It is shown that any multivariate polynomial of degree $d$ that can be computed sequentially in $C$ steps can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors.

  9. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    Science.gov (United States)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-based GPU architectures. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.
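
    As a minimal illustration of the shared-memory side of such a comparison, the sketch below parallelizes a SAXPY loop with OpenMP; the kernel, array size, and timing harness are illustrative choices rather than the paper's case studies, and the CUDA counterpart would launch one GPU thread per element.

```c
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)

int main(void) {
    float *x = malloc(N * sizeof *x), *y = malloc(N * sizeof *y);
    const float a = 2.0f;
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    double t0 = omp_get_wtime();
    /* Data-parallel SAXPY: each thread handles a contiguous chunk. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];
    double t1 = omp_get_wtime();

    printf("saxpy: %.3f ms on %d threads\n",
           (t1 - t0) * 1e3, omp_get_max_threads());
    free(x); free(y);
    return 0;
}
```

    Compiled with `gcc -fopenmp`, the same loop runs serially or in parallel depending on the thread count, which is what makes it a convenient baseline for CPU/GPU comparisons.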

  10. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    This book presents the papers given at a conference which reviewed the new developments in parallel and vector processing. Topics considered at the conference included hardware (array processors, supercomputers), programming languages, software aids, numerical methods (e.g., Monte Carlo algorithms, iterative methods, finite elements, optimization), and applications (e.g., neutron transport theory, meteorology, image processing).

  11. Embedded Data Processor and Portable Computer Technology testbeds

    Science.gov (United States)

    Alena, Richard; Liu, Yuan-Kwei; Goforth, Andre; Fernquist, Alan R.

    1993-01-01

    Attention is given to current activities in the Embedded Data Processor and Portable Computer Technology testbed configurations that are part of the Advanced Data Systems Architectures Testbed at the Information Sciences Division at NASA Ames Research Center. The Embedded Data Processor Testbed evaluates advanced microprocessors for potential use in mission and payload applications within the Space Station Freedom Program. The Portable Computer Technology (PCT) Testbed integrates and demonstrates advanced portable computing devices and data system architectures. The PCT Testbed uses both commercial and custom-developed devices to demonstrate the feasibility of functional expansion and networking for portable computers in flight missions.

  12. Environment-adaptive speech enhancement for bilateral cochlear implants using a single processor.

    Science.gov (United States)

    Mirzahasanloo, Taher S; Kehtarnavaz, Nasser; Gopalakrishna, Vanishree; Loizou, Philipos C

    2013-05-01

    A computationally efficient speech enhancement pipeline for noisy environments, based on a single-processor implementation, is developed for use in bilateral cochlear implant systems. A two-channel joint objective function is defined, and a closed-form solution is obtained based on the weighted-Euclidean distortion measure. Its computational efficiency and freedom from synchronization requirements make this pipeline a suitable solution for real-time deployment. A speech quality measure is used to show its effectiveness in six different noisy environments, as compared to a similar one-channel enhancement pipeline running on two separate processors or using independent sequential processing.

  13. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  14. Evaluation of a simplified version of KENO V.a on a parallel processors computer

    International Nuclear Information System (INIS)

    Ugolini, D.; Petrie, L.M.; Dodds, H.L. Jr.

    1987-01-01

    KENO V.a is a widely used Monte Carlo criticality code developed by Oak Ridge National Laboratory for use primarily on large single-processor mainframe computers. The code can be very costly to use if a large number of histories is required, because the histories are performed sequentially on the single processor. With the advent of parallel processor computers, it should be possible to reduce computing costs (i.e., computer run time) by performing the histories in parallel. The purpose of this work is to implement KENO V.a on a parallel processor computer, specifically the NCUBE, and then to compare results obtained on the NCUBE (i.e., accuracy and computing time) with results obtained on a large mainframe computer (IBM 3033). The NCUBE is a message-passing machine with no shared memory. A simplified version of KENO V.a was developed for this study because the standard version was too large to compile on the NCUBE. In addition, a special 1-group cross-section library, reduced from the standard 16-group Hansen-Roach library, was also used. The sample problem used in this study was an 18-cm-diam sphere of ²³⁵U at 0.05 atom/(b·cm).
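
    The parallelization strategy, running independent history batches on each node and combining the tallies, can be sketched generically. In the sketch below, run_history is an illustrative stand-in that carries none of the KENO V.a physics, and a real NCUBE run would combine partial sums by message passing rather than a serial loop.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for one neutron history: returns a per-history score. */
static double run_history(unsigned *seed) {
    return (double)rand_r(seed) / RAND_MAX;  /* placeholder tally */
}

int main(void) {
    const int nodes = 16, total_histories = 100000;
    const int per_node = total_histories / nodes;
    double global_sum = 0.0;

    /* Each node runs its share with an independent seed stream; on a
       message-passing hypercube the partial sums would be gathered and
       combined across nodes instead of accumulated in this serial loop. */
    for (int node = 0; node < nodes; node++) {
        unsigned seed = 12345u + 7919u * node;   /* decorrelated seeds */
        double partial = 0.0;
        for (int h = 0; h < per_node; h++)
            partial += run_history(&seed);
        global_sum += partial;
    }
    printf("mean score = %f\n", global_sum / total_histories);
    return 0;
}
```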

  15. Writing Teaching and the Word Processor. A Computer Discussion Paper.

    Science.gov (United States)

    Walshe, R. D., Ed.

    Designed for use by elementary school teachers, this discussion paper examines the use of the word processor in the teaching of writing, covering both the positive and negative aspects of computer use. After comparing the writing process with the problem-solving process, the paper provides articles relating teachers' experiences…

  16. Programs for Testing Processor-in-Memory Computing Systems

    Science.gov (United States)

    Katz, Daniel S.

    2006-01-01

    The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other; pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
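
    A minimal example of the kind of basic-functionality test such a series begins with, using the POSIX pthreads routines the abstract mentions (this particular program is an assumed illustration, not one of the actual microbenchmarks):

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;                 /* serialized update */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (int t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, NULL);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
    /* A correct count demonstrates create/join and mutual exclusion. */
    printf("counter = %ld (expected %d)\n", counter, NTHREADS * ITERS);
    return 0;
}
```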

  17. Graphics processor efficiency for realization of rapid tabular computations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Us, S.A.; Shestakov, M.V.

    2016-01-01

    Capabilities of graphics processing units (GPU) and central processing units (CPU) have been investigated for the realization of fast-calculation algorithms with the use of tabulated functions. The realization of tabulated functions is exemplified on GPU/CPU-architecture-based processors. Comparison is made between the operating efficiencies of GPU and CPU, employed for tabular calculations under different conditions of use. Recommendations are formulated for the use of graphics and central processors to speed up scientific and engineering computations through the use of tabulated functions.
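
    The core idea of tabulated-function computation, sampling an expensive function once and answering each call with an indexed lookup plus linear interpolation, can be sketched as follows (the function choice and table size are arbitrary examples):

```c
#include <math.h>
#include <stdio.h>

#define TABLE_SIZE 1024
static double table[TABLE_SIZE + 1];
static const double XMAX = 6.283185307179586;  /* 2*pi */

/* Precompute the samples once. */
static void build_table(void) {
    for (int i = 0; i <= TABLE_SIZE; i++)
        table[i] = sin(XMAX * i / TABLE_SIZE);
}

/* Table lookup with linear interpolation; valid for 0 <= x < XMAX. */
static double fast_sin(double x) {
    double pos = x / XMAX * TABLE_SIZE;
    int i = (int)pos;
    double frac = pos - i;
    return table[i] + frac * (table[i + 1] - table[i]);
}

int main(void) {
    build_table();
    printf("fast_sin(1.0) = %f, sin(1.0) = %f\n", fast_sin(1.0), sin(1.0));
    return 0;
}
```

    On a GPU, the table would typically be placed in constant or texture memory so that many threads can read it concurrently, which is where the GPU/CPU efficiency comparison becomes interesting.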

  18. Optimal neural computations require analog processors

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware-imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and suggests the following two alternatives: (1) cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow the use of the third dimension (e.g. using optical interconnections).

  19. Silicon auditory processors as computer peripherals.

    Science.gov (United States)

    Lazzaro, J; Wawrzynek, J; Mahowald, M; Sivilotti, M; Gillespie, D

    1993-01-01

    Several research groups are implementing analog integrated circuit models of biological auditory processing. The outputs of these circuit models have taken several forms, including video format for monitor display, simple scanned output for oscilloscope display, and parallel analog outputs suitable for data-acquisition systems. Here, an alternative output method for silicon auditory models, suitable for direct interface to digital computers, is described. As a prototype of this method, an integrated circuit model of temporal adaptation in the auditory nerve that functions as a peripheral to a workstation running Unix is described. Data from a working hybrid system that includes the auditory model, a digital interface, and asynchronous software are given. This system produces a real-time X-window display of the response of the auditory nerve model.

  20. Massive affordable computing using ARM processors in high energy physics

    Science.gov (United States)

    Smith, J. W.; Hamilton, A.

    2015-05-01

    High performance computing is relevant in many applications around the world, particularly high energy physics. Experiments such as ATLAS, CMS, ALICE and LHCb generate huge amounts of data which need to be stored and analyzed at server farms located on site at CERN and around the world. Apart from the initial cost of setting up an effective server farm, the cost of power consumption and cooling is significant. The proposed solution to reduce costs without losing performance is to make use of ARM® processors found in nearly all smartphones and tablet computers. Their low power consumption, low cost and respectable processing speed make them an interesting choice for future large scale parallel data processing centers. Benchmarks on the Cortex™-A series of ARM® processors, including the HPL and PMBW suites, will be presented, and preliminary results from the PROOF benchmark in the context of high energy physics will be analyzed.

  1. Image matrix processor for fast multi-dimensional computations

    Science.gov (United States)

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  2. Data-link of autonomous CAMAC processor systems with a computer

    International Nuclear Information System (INIS)

    Brehmer, W.

    1978-08-01

    When CAMAC processor systems are employed, a data-link connecting the processor system and a host computer is often required. The functions for the data-link have been defined. The implementation of the data-link between a DEC PDP 11/40 computer and an INCAA CAMAC processor system CAPRO-1 is described. The data-link includes procedures for dialog and data transfer integrated into the executive of the processor system CAPRO-1. (orig.) [de]

  3. MPC Related Computational Capabilities of ARMv7A Processors

    DEFF Research Database (Denmark)

    Frison, Gianluca; Jørgensen, John Bagterp

    2015-01-01

    …and A15, and show how to exploit the unique features of each processor to obtain the best performance, in the context of a novel implementation method for the linear-algebra routines used in MPC solvers. This method adapts high-performance computing techniques to the needs of embedded MPC. In particular, we investigate the performance of matrix-matrix and matrix-vector multiplications, which are the backbones of second- and first-order methods for convex optimization. Finally, we test the performance of MPC solvers implemented using these optimized linear-algebra routines…
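
    A plain fixed-size matrix-matrix multiply of the kind such routines optimize is sketched below. The loop ordering keeps the innermost accesses sequential in memory, but the register blocking and NEON intrinsics a tuned ARMv7A kernel would add are omitted; sizes and values are illustrative.

```c
#include <stdio.h>

#define N 4  /* small, fixed problem size typical of embedded MPC */

/* C = A*B for row-major N x N matrices; hoisting A[i][k] lets the
   innermost loop stream through B and C rows sequentially. */
static void dgemm_nn(const double A[N][N], const double B[N][N],
                     double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = 0.0;
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            double a = A[i][k];          /* reused across the j loop */
            for (int j = 0; j < N; j++)
                C[i][j] += a * B[k][j];
        }
}

int main(void) {
    double A[N][N] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
    double B[N][N] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
    double C[N][N];
    dgemm_nn(A, B, C);   /* identity times B should reproduce B */
    printf("C[0][0]=%g C[3][3]=%g\n", C[0][0], C[3][3]);
    return 0;
}
```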

  4. Processor Instructions Execution Models in Computer Systems Supporting Hardware Virtualization When an Intruder Takes Detection Countermeasures

    OpenAIRE

    A. E. Zhukov; I. Y. Korkin; B. M. Sukhinin

    2012-01-01

    We discuss processor mode switching schemes and analyze processor instruction execution in the cases when a hypervisor is present in the computer or not. We determine processor instruction execution latency statistics that are applicable for detecting these hypervisors when an intruder is modifying the time stamp counter.
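
    The measurement idea reduces to timing a trap-prone instruction against the time stamp counter. A minimal x86 sketch using CPUID, which causes a VM exit under hardware virtualization, follows; the iteration count and the cycle figures in the comments are rough assumptions, not numbers from the paper.

```c
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc */
#include <cpuid.h>       /* __get_cpuid (GCC/Clang) */

int main(void) {
    unsigned a, b, c, d;
    const int iters = 1000;
    uint64_t total = 0;

    for (int i = 0; i < iters; i++) {
        uint64_t t0 = __rdtsc();
        __get_cpuid(0, &a, &b, &c, &d);  /* CPUID triggers a VM exit
                                            under hardware virtualization */
        uint64_t t1 = __rdtsc();
        total += t1 - t0;
    }
    /* On bare metal CPUID costs on the order of 100-200 cycles; a VM
       exit typically inflates this by an order of magnitude or more,
       unless the intruder also adjusts the time stamp counter. */
    printf("mean CPUID latency: %llu cycles\n",
           (unsigned long long)(total / iters));
    return 0;
}
```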

  5. Architecture and VHDL behavioural validation of a parallel processor dedicated to computer vision

    International Nuclear Information System (INIS)

    Collette, Thierry

    1992-01-01

    Speeding up image processing is mainly achieved using parallel computers; SIMD (single instruction stream, multiple data stream) processors have been developed, and have proven highly efficient for low-level image processing operations. Nevertheless, their performance drops for most intermediate or high level operations, mainly when random data reorganisations in processor memories are involved. The aim of this thesis was to extend the SIMD computer's capabilities to allow it to perform more efficiently at the intermediate level of image processing. The study of some representative algorithms of this class points out the limits of this computer. Nevertheless, these limits can be removed by architectural modifications. This leads us to propose SYMPATIX, a new SIMD parallel computer. To validate its new concept, a behavioural model written in VHDL (Hardware Description Language) has been elaborated. With this model, the new computer's performance has been estimated by running simulations of image processing algorithms. The VHDL modelling approach allows top-down electronic design of the system, giving an easy coupling between system architectural modifications and their electronic cost. The obtained results show SYMPATIX to be an efficient computer for low and intermediate level image processing. It can be connected to a high level computer, opening up the development of new computer vision applications. This thesis also presents a top-down design method, based on VHDL, intended for electronic system architects. (author) [fr]

  6. The Potential of the Cell Processor for Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  7. Highway traffic simulation on multi-processor computers

    Energy Technology Data Exchange (ETDEWEB)

    Hanebutte, U.R.; Doss, E.; Tentner, A.M.

    1997-04-01

    A computer model has been developed to simulate highway traffic for various degrees of automation with a high level of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway traffic system and allows for the use of Intelligent Transportation System (ITS) technologies such as an Automated Intelligent Cruise Control (AICC). The structure of the computer model facilitates the use of parallel computers for the highway traffic simulation, since domain decomposition techniques can be applied in a straightforward fashion. In this model, the highway system (i.e., a network of road links) is divided into multiple regions; each region is controlled by a separate link manager residing on an individual processor. A graphical user interface augments the computer model by allowing for real-time interactive simulation control and interaction with each individual vehicle and roadside infrastructure element on each link. Average speed and traffic volume data are collected at user-specified loop detector locations. Further, as a measure of safety, the so-called Time To Collision (TTC) parameter is recorded.

  8. Algorithms for computational fluid dynamics on parallel processors

    International Nuclear Information System (INIS)

    Van de Velde, E.F.

    1986-01-01

    A study of parallel algorithms for the numerical solution of partial differential equations arising in computational fluid dynamics is presented. The actual implementation on parallel processors of shared and nonshared memory design is discussed. The performance of these algorithms is analyzed in terms of machine efficiency, communication time, bottlenecks and software development costs. For elliptic equations, a parallel preconditioned conjugate gradient method is described, which has been used to solve pressure equations discretized with high order finite elements on irregular grids. A parallel full multigrid method and a parallel fast Poisson solver are also presented. Hyperbolic conservation laws were discretized with parallel versions of finite difference methods like the Lax-Wendroff scheme and with the Random Choice method. Techniques are developed for comparing the behavior of an algorithm on different architectures as a function of problem size and local computational effort. Effective use of these advanced architecture machines requires the use of machine dependent programming. It is shown that the portability problems can be minimized by introducing high level operations on vectors and matrices structured into program libraries.
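
    For reference, the Lax-Wendroff scheme mentioned above, written for the linear advection equation $u_t + a u_x = 0$ with Courant number $\nu = a \Delta t / \Delta x$:

    $$u_i^{n+1} = u_i^n - \frac{\nu}{2}\left(u_{i+1}^n - u_{i-1}^n\right) + \frac{\nu^2}{2}\left(u_{i+1}^n - 2u_i^n + u_{i-1}^n\right).$$

    Its three-point stencil is what makes domain decomposition natural: each processor updates its own subdomain and exchanges only boundary values with its neighbors at every time step.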

  9. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR) via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.
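
    The backprojection kernel itself is a compact, massively parallel loop nest: every output pixel accumulates an interpolated sample from every pulse. A generic sketch follows, with simplified 2-D geometry; the array layout, names, and linear interpolation are assumptions for illustration, not the paper's optimized code.

```c
#include <math.h>
#include <stddef.h>

/* Generic backprojection: for each image pixel, sum the range-matched
   sample from every pulse. proj[p][r] holds range-compressed data;
   px/py are pulse positions in pixel units; dr converts distance to
   range-bin index. The pixel loops are independent, hence parallel. */
void backproject(size_t npulses, size_t nranges, size_t nx, size_t ny,
                 const float *proj,                 /* npulses x nranges */
                 const float *px, const float *py,  /* pulse positions  */
                 float dr, float *image)            /* nx x ny, zeroed  */
{
    for (size_t iy = 0; iy < ny; iy++)
        for (size_t ix = 0; ix < nx; ix++) {
            float acc = 0.0f;
            for (size_t p = 0; p < npulses; p++) {
                float dx = (float)ix - px[p];
                float dy = (float)iy - py[p];
                float r = sqrtf(dx * dx + dy * dy) / dr;  /* range bin */
                size_t r0 = (size_t)r;
                if (r0 + 1 < nranges) {
                    float f = r - (float)r0;  /* linear interpolation */
                    const float *row = proj + p * nranges;
                    acc += (1.0f - f) * row[r0] + f * row[r0 + 1];
                }
            }
            image[iy * nx + ix] = acc;
        }
}
```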

  10. Single neuron computation

    CERN Document Server

    McKenna, Thomas M; Zornetzer, Steven F

    1992-01-01

    This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding single neuron structure. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real neurons…

  11. Hearing performance in single-sided deaf cochlear implant users after upgrade to a single-unit speech processor.

    Science.gov (United States)

    Mertens, Griet; Hofkens, Anouk; Punte, Andrea Kleine; De Bodt, Marc; Van de Heyning, Paul

    2015-01-01

    Single-sided deaf (SSD) patients report multiple benefits after cochlear implantation (CI), such as tinnitus suppression, speech perception, and sound localization. The first single-unit speech processor, the RONDO, was launched recently. Both the RONDO and the well-known behind-the-ear (BTE) speech processor work on the same audio processor platform. However, in contrast to the BTE, the microphone placement on the RONDO is different. The aim of this study was to evaluate hearing performance using the BTE speech processor versus the single-unit speech processor. Subjective and objective outcomes in SSD CI patients with a BTE speech processor and a single-unit speech processor, with particular focus on spatial hearing, were compared. Ten adults with unilateral incapacitating tinnitus resulting from ipsilateral sensorineural deafness were enrolled in the study. The mean age at enrollment in the study was 56 (standard deviation, 13) years. The subjects were cochlear implanted at a mean age of 48 (standard deviation, 14) years and had on average 8 years' experience with their CI (range, 4-11 yr). At the first test interval (T0), testing was conducted using the subject's BTE speech processor, with which they were already familiar. Aided free-field audiometry, speech reception in noise, and sound localization testing were performed. Self-administered questionnaires on subjective evaluation consisted of HISQUI-NL, SSQ5, SHQ, and a Visual Analogue Scale to assess tinnitus loudness and disturbance. All 10 subjects were upgraded to the single-unit processor and retested after 28 days (T28) with the same fitting map. At T28, an additional single-unit questionnaire was administered to determine qualitative experiences and the effect of the position of the microphone on the new speech processor. Equal hearing outcomes were found between the single-unit and the BTE speech processor: median PTA(single-unit) (0.5, 1, 2 kHz) = 40 (range, 33-48) dB HL; median Speech Reception…

  12. Compiler for Fast, Accurate Mathematical Computing on Integer Processors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposers will develop a computer language compiler to enable inexpensive, low-power, integer-only processors to carry out mathematically-intensive computations...

  13. Input data requirements for special processors in the computation system containing the VENTURE neutronics code

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1979-07-01

    User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user-oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.

  14. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    Energy Technology Data Exchange (ETDEWEB)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore designs (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed and optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient…

  15. Multithreaded Processors

    Indian Academy of Sciences (India)

    IAS Admin

    Venkat Arun is a 3rd-year BTech Computer Science student at IIT Guwahati, currently working on congestion control in computer networks. In this article, we describe the constraints faced by modern computer designers due to the operating speed mismatch between processors and memory units…

  16. The Square Kilometre Array Science Data Processor. Preliminary compute platform design

    International Nuclear Information System (INIS)

    Broekema, P.C.; Nieuwpoort, R.V. van; Bal, H.E.

    2015-01-01

    The Square Kilometre Array is a next-generation radio telescope, to be built in South Africa and Western Australia. It is currently in its detailed design phase, with procurement and construction scheduled to start in 2017. The SKA Science Data Processor is the high-performance computing element of the instrument, responsible for producing science-ready data. This is a major IT project, with the Science Data Processor expected to challenge the computing state of the art even in 2020. In this paper we introduce the preliminary Science Data Processor design and the principles that guide the design process, as well as the constraints on the design. We introduce a highly scalable and flexible system architecture capable of handling the SDP workload.

  17. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost-effective, local-memory parallel computer designed for floating-point-intensive, grid-based problems. The processing nodes of the system are single-board array processors based on the FORTRAN- and C-programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16-port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent, site-oriented software system called CANOPY, which shields theorist users from the underlying node structure. 4 refs., 2 figs.

  18. Learning to use a word processor with concurrent computer-assisted instruction

    NARCIS (Netherlands)

    Simons, P.R.J.; Biemans, H.J.A.

    1992-01-01

    In this study the effects of embedding regulation questions and regulation hints in a concurrent computer-assisted instruction (CAI) program aimed at learning to use a word processor were examined. This instructional shell, WP-DAGOGUE, controlled the interaction between the subject and the word processor.

  19. Restricted access processor - An application of computer security technology

    Science.gov (United States)

    Mcmahon, E. M.

    1985-01-01

    This paper describes a security guard device that is currently being developed by Computer Sciences Corporation (CSC). The methods used to provide assurance that the system meets its security requirements include the system architecture, a system security evaluation, and the application of formal and informal verification techniques. The combination of state-of-the-art technology and the incorporation of new verification procedures results in a demonstration of the feasibility of computer security technology for operational applications.

  20. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco

    2017-01-01

    …in FDS version 6.0, a widely used fire-specific, open-source CFD software package. Furthermore, it compares the reduction in simulation time given by multiscale modelling with that given by the use of multiple-processor calculation. This was done using a 1200 m long tunnel with a rectangular cross… processor calculation (97% faster when using a single mesh and multiscale modelling; only 46% faster when using the full tunnel and multiple meshes). In summary, it was found that multiscale modelling with FDS v.6.0 is feasible, and the combination of multiple meshes and multiscale modelling was established…

  1. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  2. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software. Revision 3

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  3. Computer Aided Grid Interface: An Interactive CFD Pre-Processor

    Science.gov (United States)

    Soni, Bharat K.

    1997-01-01

    NASA maintains an applications-oriented computational fluid dynamics (CFD) effort complementary to and in support of aerodynamic-propulsion design and test activities. This is especially true at NASA/MSFC, where the goal is to advance and optimize present and future liquid-fueled rocket engines. Numerical grid generation plays a significant role in fluid flow simulations utilizing CFD. An overall goal of the current project was to develop a geometry-grid generation tool that will help engineers, scientists, and CFD practitioners analyze design problems involving complex geometries in a timely fashion. This goal is accomplished by developing the CAGI: Computer Aided Grid Interface system. The CAGI system is developed by integrating CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) geometric system output and/or Initial Graphics Exchange Specification (IGES) files (including all the NASA-IGES entities), geometry manipulations and generations associated with grid constructions, and robust grid generation methodologies. This report describes the development process of the CAGI system.

  4. PHANTOM: Practical Oblivious Computation in a Secure Processor

    Science.gov (United States)

    2014-05-16

    …competitive relationship between the cloud provider and the client. For example, Netflix is hosted on Amazon's EC2 cloud infrastructure, while Amazon is a…

  5. Distributed processor allocation for launching applications in a massively connected processors complex

    Science.gov (United States)

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.

  6. Speech Intelligibility in Noise With a Single-Unit Cochlear Implant Audio Processor.

    Science.gov (United States)

    Wimmer, Wilhelm; Caversaccio, Marco; Kompis, Martin

    2015-08-01

    The Rondo is a single-unit cochlear implant (CI) audio processor comprising the same components as its behind-the-ear predecessor, the Opus 2. An interchange of the Opus 2 with the Rondo leads to a shift of the microphone position toward the back of the head. This study aimed to investigate the influence of the Rondo wearing position on speech intelligibility in noise. Speech intelligibility in noise was measured in 4 spatial configurations with 12 experienced CI users using the German adaptive Oldenburg sentence test. A physical model and a numerical model were used to enable a comparison of the observations. No statistically significant differences in speech intelligibility were found in the situations in which the signal came from the front and the noise came from the frontal, ipsilateral, or contralateral side. The signal-to-noise ratio (SNR) was significantly better with the Opus 2 in the case with the noise presented from the back (4.4 dB, p < 0.05), with speech intelligibility poorer for processors placed further behind the ear than closer to the ear. The study indicates that CI users with the receiver/stimulator implanted in positions further behind the ear are expected to have greater difficulties in noisy situations when wearing the single-unit audio processor.

  7. The Development of Word Processor-Mainframe Computer Interaction

    Science.gov (United States)

    Cain, M.; Stocker, T.

    1983-01-01

    This paper addresses how peripheral word processing units have been modified into total workstations, enabling a user to perform multiple functions. Manuscripts can be prepared and edited, information can be passed to and extracted from the mainframe computer, and the mainframe's superior processing capabilities can be utilized. This is a viable alternative to the so-called “Heinz 57” approach to information systems characterized in many institutions. Presented here will be a short background of data processing at The Children's Hospital, necessary to show the explosive growth of data processing in a medical institution, a description of how word processing configuration played a determining role in equipment selection, and a sampling of actual word processing—mainframe applications that have been accomplished.

  8. Analysis of the computational requirements of a pulse-doppler radar signal processor

    CSIR Research Space (South Africa)

    Broich, R

    2012-05-01

    [Figure 1 in the original shows the radar signal processor (RSP) flow of operations.] …general-purpose computer architectures [3]. An abstract machine, in which only memory reads, writes, additions and multiplications are considered to be significant operations, is chosen for the model of computation. For each algorithm, a pseudo-code listing is used to find an expression for the required number of additions/subtractions, multiplications/divisions, as well as memory reads and writes. Based on the parameters…
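
    As an example of the closed-form counts such an analysis produces, the standard operation count for an $N$-point radix-2 FFT, a core pulse-Doppler kernel (a textbook figure, not one taken from this report):

    $$M(N) = \frac{N}{2}\log_2 N \text{ complex multiplications}, \qquad A(N) = N \log_2 N \text{ complex additions},$$

    where each complex multiplication in turn costs four real multiplications and two real additions in the straightforward implementation.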

  9. Single event effects and performance predictions for space applications of RISC processors

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J.R.; Colella, N.J.; Denton, S.M.; Shaeffer, D.L.; Shih, D.; Wilburn, J.W. (Lawrence Livermore National Lab., CA (United States)); Coakley, P.G. (JAYCOR, San Diego, CA (United States)); Casteneda, C. (Crocker Nuclar Lab., Davis, CA (United States)); Koga, R. (Aerospace Corp., El Segundo, CA (United States)); Clark, D.A.; Ullmann, J.L. (Los Alamos National Lab., NM (United States))

    1994-12-01

    Proton and ion Single Event Phenomena (SEP) tests were performed on 32-b processors including R3000A's from all commercial manufacturers along with the Performance PR3400 family, Integrated Device Technology Inc. 79R3081, LSI Logic Corporation LR33000HC, and Intel i80960MX parts. The microprocessors had acceptable upset rates for operation in a low earth orbit or a lunar mission such as CLEMENTINE with a wide range in proton total dose failure. Even though R3000A devices are 60% smaller in physical area than R3000 devices, there was a 340% increase in device Single Event Upset (SEU) cross section. Software tests of varying complexity demonstrate that registers and other functional blocks using register architecture dominate the cross section. The current approach of giving a single upset cross section can lead to erroneous upset rates depending on the application software.

  10. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  11. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software. Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  12. TMS320C25 Digital Signal Processor For 2-Dimensional Fast Fourier Transform Computation

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    1996-01-01

    The Fourier transform is one of the most important mathematical tools in signal processing and analysis, converting information from the time/spatial domain into the frequency domain. Even with implementations of Fast Fourier Transform algorithms, computing the discrete Fourier transform of imaging data consumes a lot of time. Digital signal processors are designed specifically to perform computation-intensive digital signal processing algorithms. By taking advantage of an advanced architecture, parallel processing, and a dedicated digital signal processing (DSP) instruction set, such a device can execute millions of DSP operations per second. The device architecture and the characteristics and features that suit it to fast Fourier transform computation and speed-up are discussed.
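
    The property that lets a processor with a fast 1-D FFT handle images is separability: the 2-D DFT factors into row transforms followed by column transforms,

    $$F(u,v) = \sum_{x=0}^{M-1} e^{-j 2\pi u x / M} \left[ \sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi v y / N} \right],$$

    so an M×N image requires M N-point row transforms followed by N M-point column transforms, each of which maps directly onto the DSP's 1-D FFT routine.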

  13. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    Science.gov (United States)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display, to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  14. Single event upset tests of a RISC-based fault-tolerant computer

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J.R.; Butner, D.N.; Colella, N.J.; Kaschmitter, J.L.; Shaeffer, D.L.; McKnett, C.L.; Coakley, P.G.; Casteneda, C.

    1996-03-23

    The project successfully demonstrated that dual lock-step comparison of commercial RISC processors is a viable fault-tolerant approach to handling SEUs in the space environment. The on-orbit error rate of the fault-tolerant approach was 38 times less than the single-processor error rate. The random nature of the upsets and their appearance in critical code sections show that it is essential to incorporate both hardware and software in the design and operation of fault-tolerant computers.

  15. Computer simulation of a space SAR using a range-sequential processor for soil moisture mapping

    Science.gov (United States)

    Fujita, M.; Ulaby, F. (Principal Investigator)

    1982-01-01

    The ability of a spaceborne synthetic aperture radar (SAR) to detect soil moisture was evaluated by means of a computer simulation technique. The computer simulation package includes coherent processing of the SAR data using a range-sequential processor, which can be set up through hardware implementations, thereby reducing the amount of telemetry involved. With such a processing approach, it is possible to monitor the earth's surface on a continuous basis, since data storage requirements can be easily met through the use of currently available technology. The development of the simulation package is described, followed by an examination of the application of the technique to actual environments. The results indicate that in estimating soil moisture content with a four-look processor, the difference between the assumed and estimated values of soil moisture is within ±20% of field capacity for 62% of the pixels for agricultural terrain and for 53% of the pixels for hilly terrain. The estimation accuracy for soil moisture may be improved by reducing the effect of fading through non-coherent averaging.

  16. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
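
    The contention effect the benchmarks expose is easy to reproduce on commodity hardware: when several cores stream data concurrently, the shared memory subsystem saturates and per-core bandwidth drops. A minimal pthreads sketch, not the paper's benchmark; the thread count and array size are illustrative:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NELEM (1L << 24)            /* doubles per thread: 128 MB of traffic */
#define MAXTHREADS 64

static double *buf;
static double sink[MAXTHREADS];     /* keeps the reads from being optimized away */

static void *stream_sum(void *arg)
{
    long id = (long)arg;
    const double *p = buf + id * NELEM;
    double s = 0.0;
    for (long i = 0; i < NELEM; i++)   /* sequential streaming read */
        s += p[i];
    sink[id] = s;
    return NULL;
}

int main(int argc, char **argv)
{
    int nt = argc > 1 ? atoi(argv[1]) : 4;
    pthread_t t[MAXTHREADS];
    struct timespec t0, t1;

    buf = calloc((size_t)NELEM * nt, sizeof(double));
    if (!buf) return 1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < nt; i++)
        pthread_create(&t[i], NULL, stream_sum, (void *)i);
    for (int i = 0; i < nt; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gb = (double)NELEM * nt * sizeof(double) / 1e9;
    printf("%d threads: %.2f GB/s aggregate (checksum %g)\n", nt, gb / sec, sink[0]);
    return 0;
}
```

    If aggregate bandwidth stops scaling as threads are added, the cores are contending for the shared memory subsystem, which is the behavior the benchmark analysis above reports for dual- and quad-core parts.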

  17. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    Science.gov (United States)

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
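
    The communication pattern the abstract quantifies (two doubles sent and received per subtree per time step) maps naturally onto a paired exchange. A minimal MPI sketch of that exchange, not the actual NEURON implementation:

```c
#include <mpi.h>

/* Per-time-step exchange between the two halves of a split cell:
   each half sends and receives exactly two doubles (e.g. the matrix
   equation coefficients at the split node). Called once per step. */
void exchange_split_node(double send[2], double recv[2], int peer_rank)
{
    MPI_Sendrecv(send, 2, MPI_DOUBLE, peer_rank, 0,
                 recv, 2, MPI_DOUBLE, peer_rank, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
```

    Because the payload is fixed at two doubles per step, the communication cost is latency-bound and independent of cell size, which is consistent with the claim that splitting adds no computational effort.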

  18. Spaceborne Processor Array

    Science.gov (United States)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  19. Embedded Processor Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Embedded Processor Laboratory provides the means to design, develop, fabricate, and test embedded computers for missile guidance electronics systems in support...

  20. Long-term subjective benefit with a bone conduction implant sound processor in 44 patients with single-sided deafness.

    Science.gov (United States)

    Desmet, Jolien; Wouters, Kristien; De Bodt, Marc; Van de Heyning, Paul

    2014-07-01

    Studies that investigate the subjective benefit from a bone conduction implant (BCI) sound processor in patients with single-sided sensorineural deafness (SSD) have been limited to examining short- and mid-term benefit. In the current study, we performed a survey among 44 SSD BCI users with a median follow-up time of 50 months. Forty-four experienced SSD BCI users participated in the survey, which consisted of the Abbreviated Profile of Hearing Aid Benefit, the Single-Sided Deafness Questionnaire, the Short Hearing Handicap Inventory for Adults, and a self-made user questionnaire. For patients with tinnitus, the Tinnitus Questionnaire was also completed. The results of the survey were correlated with contralateral hearing loss, age at implantation, duration of the hearing loss at the time of implantation, duration of BCI use, and the presence and burden of tinnitus. In total, 86% of the patients still used their sound processor. The Abbreviated Profile of Hearing Aid Benefit and the Short Hearing Handicap Inventory for Adults show a statistically significant overall improvement with the BCI. The Single-Sided Deafness Questionnaire and the user questionnaire showed that almost 40% of the patients reported daily use of the sound processor. However, the survey of daily use reveals benefit only in certain circumstances. Speech understanding in noisy situations is rated rather low, and 58% of all patients reported that their BCI benefit was less than expected. The majority of the patients reported an overall improvement from using their BCI. However, the number of users decreases over a longer follow-up time, and patients become less enthusiastic about the device after an extended period of use, especially in noisy situations. Diminished satisfaction because of time-related reductions in processor function could not be ruled out.

  1. Icarus: A 2-D Direct Simulation Monte Carlo (DSMC) Code for Multi-Processor Computers

    International Nuclear Information System (INIS)

    BARTEL, TIMOTHY J.; PLIMPTON, STEVEN J.; GALLIS, MICHAIL A.

    2001-01-01

    Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird[11.1] and models from free-molecular to continuum flowfields in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled by either a local charge neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. All of the software packages are written in standard Fortran.

  2. XL-100S microprogrammable processor

    International Nuclear Information System (INIS)

    Gorbunov, N.V.; Guzik, Z.; Sutulin, V.A.; Forytski, A.

    1983-01-01

    The XL-100S microprogrammable processor providing the multiprocessor operation mode in the XL system crate is described. The processor meets the EUR 6500 CAMAC standards, addresses up to 4 Mbytes of memory, and interacts with 7 CAMAC branches. Eight external requests initiate operations preset by a sequence of microcommands in a memory with a capacity of up to 64 kwords of 32 bits. The microprocessor architecture allows one to emulate the commands of the majority of mini- or micro-computers, including floating point operations. The XL-100S processor may be used in various branches of experimental physics: for physical experiment apparatus control, fast selection of useful physical events, organization of input/output operations (including direct memory access), etc. The Am2900 microprocessor set is used as the component base. The device is made in the form of a single-width CAMAC module.

  3. Computations on the massively parallel processor at the Goddard Space Flight Center

    Science.gov (United States)

    Strong, James P.

    1991-01-01

    Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.
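
    Bitonic sorting, the fourth algorithm above, suits SIMD arrays like the MPP because its compare-exchange pattern is fixed in advance and independent of the data, so every stage can run across the processor array in lock-step. A sequential C sketch of the network's structure for power-of-two inputs (on the MPP each compare-exchange stage would execute in parallel):

```c
/* Compare-exchange: order a[i], a[j] ascending if dir == 1, else descending. */
static void swap_if(int *a, int i, int j, int dir)
{
    if ((a[i] > a[j]) == dir) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}

/* Merge a bitonic sequence of length n starting at lo into order dir. */
static void bitonic_merge(int *a, int lo, int n, int dir)
{
    if (n > 1) {
        int m = n / 2;
        for (int i = lo; i < lo + m; i++)
            swap_if(a, i, i + m, dir);   /* one lock-step stage on a SIMD array */
        bitonic_merge(a, lo, m, dir);
        bitonic_merge(a, lo + m, m, dir);
    }
}

/* Sort a[lo .. lo+n-1]; n must be a power of two; dir = 1 for ascending. */
static void bitonic_sort(int *a, int lo, int n, int dir)
{
    if (n > 1) {
        int m = n / 2;
        bitonic_sort(a, lo, m, 1);        /* ascending half              */
        bitonic_sort(a, lo + m, m, 0);    /* descending half             */
        bitonic_merge(a, lo, n, dir);     /* merge the bitonic sequence  */
    }
}
```

    The sequential cost is O(n log^2 n), but the fixed communication distances are what make the routing on the MPP's nearest-neighbor grid the interesting part of the implementation described above.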

  4. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    Energy Technology Data Exchange (ETDEWEB)

    Roberson, G. Patrick [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Browne, Jolyon [Advanced Research & Applications Corporation, Sunnyvale, CA (United States)

    2018-01-22

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  5. Parallel quantum computing in a single ensemble quantum computer

    International Nuclear Information System (INIS)

    Long Guilu; Xiao, L.

    2004-01-01

    We propose a parallel quantum computing mode for an ensemble quantum computer. In this mode, some qubits are in pure states while other qubits are in mixed states. It enables a single ensemble quantum computer to perform 'single-instruction-multiple-data' type parallel computation. Parallel quantum computing can provide additional speedup in Grover's algorithm and Shor's algorithm. In addition, it also makes fuller use of qubit resources in an ensemble quantum computer. As a result, some qubits discarded in the preparation of an effective pure state in the Schulman-Vazirani and the Cleve-DiVincenzo algorithms can be reutilized.

  6. Computation of induced dipoles in molecular mechanics simulations using graphics processors.

    Science.gov (United States)

    Pratas, Frederico; Sousa, Leonel; Dieterich, Johannes M; Mata, Ricardo A

    2012-05-25

    In this work, we present a tentative step toward the efficient implementation of polarizable molecular mechanics force fields with GPU acceleration. The computational bottleneck of such applications is found in the treatment of electrostatics, where higher-order multipoles and a self-consistent treatment of polarization effects are needed. We have implemented a GPU-accelerated code, based on the Tinker program suite, for the computation of induced dipoles. The largest test system used shows a speedup factor of over 20 for a single-precision GPU implementation when compared to the serial CPU version. A discussion of the optimization and parametrization steps is included. A comparison between different graphics cards and CPU-GPU embedding is also given. The current work demonstrates the potential usefulness of GPU programming in accelerating this field of applications.

  7. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    Science.gov (United States)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.

    2018-02-01

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  8. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    Directory of Open Access Journals (Sweden)

    J. I. Colless

    2018-02-01

    Full Text Available Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  9. Solution to the influence of the MSSW propagating velocity on the bandwidths of the single-scale wavelet-transform processor using MSSW device.

    Science.gov (United States)

    Lu, Wenke; Zhu, Changchun; Kuang, Lun; Zhang, Ting; Zhang, Jingduan

    2012-01-01

    The objective of this research was to investigate the possibility of solving the influence of the magnetostatic surface wave (MSSW) propagating velocity on the bandwidths of the single-scale wavelet-transform processor using an MSSW device. The motivation for this work was that the processor's -3 dB bandwidth varies as the propagating velocity of the MSSW changes. In this paper, we present the influence of the MSSW propagating velocity on the bandwidths as the key problem of the single-scale wavelet-transform processor using an MSSW device, and a solution to the problem is achieved in this study. We derived the function relating the propagating velocity of the MSSW to the -3 dB bandwidth, which shows that the -3 dB bandwidth of the single-scale wavelet-transform processor using an MSSW device varies as the propagating velocity of the MSSW changes. By adjusting the distance and orientation of the permanent magnet, we can control the MSSW propagating velocity, so that the influence of the MSSW propagating velocity on the bandwidths of the single-scale wavelet-transform processor using an MSSW device is solved. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
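
    The amortization strategy named here, pinned host memory plus asynchronous transfers, is a standard CUDA runtime pattern. A hedged host-side C sketch, not Sandia's implementation; the slice size, loop bounds, and kernel name are placeholders:

```c
#include <cuda_runtime.h>

/* Double-buffered upload: while the kernel processes slice k in one
   stream, the copy engine uploads slice k+1 in the other stream.
   Pinned (page-locked) host memory is required for truly async copies. */
int main(void)
{
    const int nslices = 8;                        /* placeholder count   */
    const size_t bytes = 2048u * 2048u * sizeof(float);
    float *h_buf, *d_buf[2];
    cudaStream_t s[2];

    cudaMallocHost((void **)&h_buf, 2 * bytes);   /* pinned host buffer  */
    for (int i = 0; i < 2; i++) {
        cudaMalloc((void **)&d_buf[i], bytes);
        cudaStreamCreate(&s[i]);
    }
    for (int k = 0; k < nslices; k++) {
        int b = k & 1;                            /* alternate buffers   */
        cudaMemcpyAsync(d_buf[b], h_buf + b * (bytes / sizeof(float)),
                        bytes, cudaMemcpyHostToDevice, s[b]);
        /* reconstruct_slice<<<grid, block, 0, s[b]>>>(d_buf[b]);
           (hypothetical kernel launch, shown as a comment)             */
    }
    cudaDeviceSynchronize();
    for (int i = 0; i < 2; i++) {
        cudaStreamDestroy(s[i]);
        cudaFree(d_buf[i]);
    }
    cudaFreeHost(h_buf);
    return 0;
}
```

    The design point is that the PCIe copy for the next slice overlaps the compute for the current one, so transfer time is hidden rather than eliminated.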

  11. Supercomputers and parallel computation. Based on the proceedings of a workshop on progress in the use of vector and array processors organised by the Institute of Mathematics and its Applications and held in Bristol, 2-3 September 1982

    International Nuclear Information System (INIS)

    Paddon, D.J.

    1984-01-01

    This book is based on the proceedings of a conference on parallel computing held in 1982. There are 18 papers which cover the following topics: VLSI parallel architectures, the theory of parallel computing and vector and array processor computing. One paper on 'Tough Problems in Reactor Design' is indexed separately. All the contributions are on research done in the United Kingdom. Although much of the experience in array processor computing is associated with the ICL distributed array processor (DAP) and this is reflected in the contributions, the research relating to the ICL DAP is relevant to all types of array processors. (UK)

  12. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    Science.gov (United States)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  13. High-speed, automatic controller design considerations for integrating array processor, multi-microprocessor, and host computer system architectures

    Science.gov (United States)

    Jacklin, S. A.; Leyland, J. A.; Warmbrodt, W.

    1985-01-01

    Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, online graphics, and file management. This paper discusses five global design considerations which are useful to integrate array processor, multimicroprocessor, and host computer system architectures into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the nonreal-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration is briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind tunnel environment, the controller architecture can generally be applied to a wide range of automatic control applications.

  14. The Heidelberg POLYP - a flexible and fault-tolerant poly-processor

    International Nuclear Information System (INIS)

    Maenner, R.; Deluigi, B.

    1981-01-01

    The Heidelberg poly-processor system POLYP is described. It is intended to be used in nuclear physics for reprocessing of experimental data, in high energy physics as a second-stage trigger processor, and generally in other applications requiring high computing power. The POLYP system consists of any number of I/O processors, processor modules (possibly of different types), global memory segments, and a host processor. All modules (up to several hundred) are connected by a multiple common-data-bus system; all processors, additionally, by a multiple sync-bus system for processor/task scheduling. All hardware and software is designed to be decentralized and free of bottlenecks. Most hardware faults, like single-bit errors in memory or multi-bit errors during transfers, are automatically corrected. Defective modules, buses, etc., can be removed with only a graceful degradation of the system throughput. (orig.)

  15. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  16. Quantum simulation of superconductors on quantum computers. Toward the first applications of quantum processors

    Energy Technology Data Exchange (ETDEWEB)

    Dallaire-Demers, Pierre-Luc

    2016-10-07

    Quantum computers are the ideal platform for quantum simulations. Given enough coherent operations and qubits, such machines can be leveraged to simulate strongly correlated materials, where intricate quantum effects give rise to counter-intuitive macroscopic phenomena such as high-temperature superconductivity. Many phenomena of strongly correlated materials are encapsulated in the Fermi-Hubbard model. In general, no closed-form solution is known for lattices of more than one spatial dimension, but they can be numerically approximated using cluster methods. To model long-range effects such as order parameters, a powerful method to compute the cluster's Green's function consists in finding its self-energy through a variational principle. As is shown in this thesis, this allows the possibility of studying various phase transitions at finite temperature in the Fermi-Hubbard model. However, a classical cluster solver quickly hits an exponential wall in the memory (or computation time) required to store the computation variables. We show theoretically that the cluster solver can be mapped to a subroutine on a quantum computer whose quantum memory usage scales linearly with the number of orbitals in the simulated cluster and the number of measurements scales quadratically. We also provide a gate decomposition of the cluster Hamiltonian and a simple planar architecture for a quantum simulator that can also be used to simulate more general fermionic systems. We briefly analyze the Trotter-Suzuki errors and estimate the scaling properties of the algorithm for more complex applications. A quantum computer with a few tens of qubits could therefore simulate the thermodynamic properties of complex fermionic lattices inaccessible to classical supercomputers.

  17. Quantum simulation of superconductors on quantum computers. Toward the first applications of quantum processors

    International Nuclear Information System (INIS)

    Dallaire-Demers, Pierre-Luc

    2016-01-01

    Quantum computers are the ideal platform for quantum simulations. Given enough coherent operations and qubits, such machines can be leveraged to simulate strongly correlated materials, where intricate quantum effects give rise to counter-intuitive macroscopic phenomena such as high-temperature superconductivity. Many phenomena of strongly correlated materials are encapsulated in the Fermi-Hubbard model. In general, no closed-form solution is known for lattices of more than one spatial dimension, but they can be numerically approximated using cluster methods. To model long-range effects such as order parameters, a powerful method to compute the cluster's Green's function consists in finding its self-energy through a variational principle. As is shown in this thesis, this allows the possibility of studying various phase transitions at finite temperature in the Fermi-Hubbard model. However, a classical cluster solver quickly hits an exponential wall in the memory (or computation time) required to store the computation variables. We show theoretically that the cluster solver can be mapped to a subroutine on a quantum computer whose quantum memory usage scales linearly with the number of orbitals in the simulated cluster and the number of measurements scales quadratically. We also provide a gate decomposition of the cluster Hamiltonian and a simple planar architecture for a quantum simulator that can also be used to simulate more general fermionic systems. We briefly analyze the Trotter-Suzuki errors and estimate the scaling properties of the algorithm for more complex applications. A quantum computer with a few tens of qubits could therefore simulate the thermodynamic properties of complex fermionic lattices inaccessible to classical supercomputers.

  18. 3081/E processor

    International Nuclear Information System (INIS)

    Kunz, P.F.; Gravina, M.; Oxoby, G.

    1984-04-01

    The 3081/E project was formed to prepare a much improved IBM mainframe emulator for the future. Its design is based on a large amount of experience in using the 168/E processor to increase available CPU power in both online and offline environments. The processor will be at least equal to the execution speed of a 370/168 and up to 1.5 times faster for heavy floating point code. A single processor will thus be at least four times more powerful than the VAX 11/780, and five processors on a system would equal at least the performance of the IBM 3081K. With its large memory space and simple but flexible high speed interface, the 3081/E is well suited for the online and offline needs of high energy physics in the future.

  19. An analysis of simple computational strategies to facilitate the design of functional molecular information processors.

    Science.gov (United States)

    Lee, Yiling; Roslan, Rozieffa; Azizan, Shariza; Firdaus-Raih, Mohd; Ramlan, Effirul I

    2016-10-28

    Biological macromolecules (DNA, RNA and proteins) are capable of processing physical or chemical inputs to generate outputs that parallel conventional Boolean logical operators. However, the design of functional modules that will enable these macromolecules to operate as synthetic molecular computing devices is challenging. Using three simple heuristics, we designed RNA sensors that can mimic the function of a seven-segment display (SSD). Ten independent and orthogonal sensors representing the numerals 0 to 9 are designed and constructed. Each sensor has its own unique oligonucleotide binding site region that is activated uniquely by a specific input. Each operator was subjected to a stringent in silico filtering. Random sensors were selected and functionally validated via ribozyme self cleavage assays that were visualized via electrophoresis. By utilising simple permutation and randomisation in the sequence design phase, we have developed functional RNA sensors thus demonstrating that even the simplest of computational methods can greatly aid the design phase for constructing functional molecular devices.

  20. Computer program SUPAN: a post processor for the analysis of beam type piping supports

    International Nuclear Information System (INIS)

    Nitzel, M.E.

    1979-01-01

    Beam-type piping supports may be fabricated from structural members with various cross sectional geometries for situations where standard vendor piping supports may not be adequate or economical. The analysis of these supports in conformance to ASME Code rules is often tedious and time consuming due to the complex loading conditions and numerous support geometries often encountered. Computer program SUPAN has been developed to facilitate the efficient analysis of a large number of supports subject to multiple loading and service conditions. The capabilities of the SUPAN computer program are described. A general discussion of ASME Code requirements and the methodology used for stress calculations are presented. A description of the calculational flow through the program with discussions of program input, output, and programmed error messages is included. Program assets and advantages include increased analyst efficiency, ease of operation, low cost and run times, automated data checking, and tabular output for inclusion in reports

  1. AMD's 64-bit Opteron processor

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    This talk concentrates on issues that relate to obtaining peak performance from the Opteron processor. Compiler options, memory layout, MPI issues in multi-processor configurations and the use of a NUMA kernel will be covered. A discussion of recent benchmarking projects and results will also be included. Biographies: David Rich. David directs AMD's efforts in high performance computing and also in the use of Opteron processors...

  2. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from thence the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
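
    The discipline described, processors running their own program copies independently until an array-wide synchronization instruction, is essentially barrier synchronization. A minimal pthreads analogue (the processor count and the work done per branch are illustrative, not the patented hardware):

```c
#include <pthread.h>
#include <stdio.h>

#define NPROC 8
static pthread_barrier_t sync_point;

static void *worker(void *arg)
{
    long id = (long)arg;
    /* ... each processor executes its own copy of the program ... */
    printf("processor %ld finished its branch\n", id);
    /* Analogue of the overall array synchronization instruction:
       no processor proceeds until all have arrived here.            */
    pthread_barrier_wait(&sync_point);
    /* ... all proceed to the next task in parallel ... */
    return NULL;
}

int main(void)
{
    pthread_t t[NPROC];
    pthread_barrier_init(&sync_point, NULL, NPROC);
    for (long i = 0; i < NPROC; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NPROC; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&sync_point);
    return 0;
}
```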

  3. Multi-dimensional characterization of electrostatic surface potential computation on graphics processors.

    Science.gov (United States)

    Daga, Mayank; Feng, Wu-Chun

    2012-04-12

    Calculating the electrostatic surface potential (ESP) of a biomolecule is critical towards understanding biomolecular function. Because of its quadratic computational complexity (as a function of the number of atoms in a molecule), there have been continual efforts to reduce its complexity either by improving the algorithm or the underlying hardware on which the calculations are performed. We present the combined effect of (i) a multi-scale approximation algorithm, known as hierarchical charge partitioning (HCP), when applied to the calculation of ESP and (ii) its mapping onto a graphics processing unit (GPU). To date, most molecular modeling algorithms perform an artificial partitioning of biomolecules into a grid/lattice on the GPU. In contrast, HCP takes advantage of the natural partitioning in biomolecules, which in turn, better facilitates its mapping onto the GPU. Specifically, we characterize the effect of known GPU optimization techniques like use of shared memory. In addition, we demonstrate how the cost of divergent branching on a GPU can be amortized across algorithms like HCP in order to deliver a massive performance boon. We accelerated the calculation of ESP by 25-fold solely by parallelization on the GPU. Combining GPU and HCP resulted in a speedup of at most 1,860-fold for our largest molecular structure. The baseline for these speedups is an implementation that has been hand-tuned, SSE-optimized, and parallelized across 16 cores on the CPU. The use of the GPU does not deteriorate the accuracy of our results.

  4. Massively Parallel Computing at Sandia and Its Application to National Defense

    National Research Council Canada - National Science Library

    Dosanjh, Sudip

    1991-01-01

    Two years ago, researchers at Sandia National Laboratories showed that a massively parallel computer with 1024 processors could solve scientific problems more than 1000 times faster than a single processor...

  5. Multithreaded Processors

    Indian Academy of Sciences (India)

    Multithreaded Processors. Venkat Arun. Resonance – Journal of Science Education, General Article, Volume 20, Issue 9, September 2015, pp 844-855. Permanent link: http://www.ias.ac.in/article/fulltext/reso/020/09/0844-0855

  6. PVM Enhancement for Beowulf Multiple-Processor Nodes

    Science.gov (United States)

    Springer, Paul

    2006-01-01

    A recent version of the Parallel Virtual Machine (PVM) computer program has been enhanced to enable use of multiple processors in a single node of a Beowulf system (a cluster of personal computers that runs the Linux operating system). A previous version of PVM had been enhanced by addition of a software port, denoted BEOLIN, that enables the incorporation of a Beowulf system into a larger parallel processing system administered by PVM, as though the Beowulf system were a single computer in the larger system. BEOLIN spawns tasks on (that is, automatically assigns tasks to) individual nodes within the cluster. However, BEOLIN does not enable the use of multiple processors in a single node. The present enhancement adds support for a parameter in the PVM command line that enables the user to specify which Internet Protocol host address the code should use in communicating with other Beowulf nodes. This enhancement also provides for the case in which each node in a Beowulf system contains multiple processors. In this case, by making multiple references to a single node, the user can cause the software to spawn multiple tasks on the multiple processors in that node.
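
    The spawning mechanism the record describes is PVM's task-placement call. A sketch using the standard PVM 3 API (the hostname and task name are hypothetical; the BEOLIN-specific command-line configuration is not shown): with the enhancement, naming the same Beowulf node twice can place one task on each of that node's two processors.

```c
#include <stdio.h>
#include <pvm3.h>

int main(void)
{
    int tid[2];
    int n = 0;

    /* PvmTaskHost directs the spawn to a named host; spawning twice on
       "node01" targets both processors of that dual-processor node.   */
    n += pvm_spawn("worker", NULL, PvmTaskHost, "node01", 1, &tid[0]);
    n += pvm_spawn("worker", NULL, PvmTaskHost, "node01", 1, &tid[1]);
    printf("spawned %d tasks\n", n);

    pvm_exit();
    return 0;
}
```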

  7. Aeroflex Single Board Computers and Instrument Circuit Cards for Nuclear Environments Measuring and Monitoring

    International Nuclear Information System (INIS)

    Stratton, Sam; Stevenson, Dave; Magnifico, Mateo

    2013-06-01

    A Single Board Computer (SBC) is an entire computer including all of the required components and I/O interfaces built on a single circuit board. SBC's are used across numerous industrial, military and space flight applications. In the case of military and space implementations, SBC's employ advanced high reliability processors designed for rugged thermal, mechanical and even radiation environments. These processors, in turn, rely on equally advanced support components such as memory, interface, and digital logic. When all of these components are put together on a printed circuit card, the result is a highly reliable Single Board Computer that can perform a wide variety of tasks in very harsh environments. In the area of instrumentation, peripheral circuit cards can be developed that directly interface to the SBC and various radiation measuring devices and systems. Designers use signal conditioning and high reliability Analog to Digital Converters (ADC's) to convert the measuring device signals to digital data suitable for a microprocessor. The data can then be sent to the SBC via high speed communication protocols such as Ethernet or similar type of serial bus. Data received by the SBC can then be manipulated and processed into a form readily available to users. Recent events are causing some in the NPP industry to consider devices and systems with better radiation and temperature performance capability. Systems designed for space application are designed for the harsh environment of space which under certain conditions would be similar to what the electronics will see during a severe nuclear reactor event. The NPP industry should be considering higher reliability electronics for certain critical applications. (authors)

  8. A Course on Reconfigurable Processors

    Science.gov (United States)

    Shoufan, Abdulhadi; Huss, Sorin A.

    2010-01-01

    Reconfigurable computing is an established field in computer science. Teaching this field to computer science students demands special attention due to limited student experience in electronics and digital system design. This article presents a compact course on reconfigurable processors, which was offered at the Technische Universitat Darmstadt,…

  9. Processor, method and computer program for processing an audio signal using truncated analysis or synthesis window overlap portions

    OpenAIRE

    2016-01-01

    A processor for processing an audio signal (200), comprises: an analyzer (202) for deriving a window control signal (204) from the audio signal (200) indicating a change from a first asymmetric window (1400) to a second window (1402), or indicating a change from a third window (1450) to a fourth asymmetric window (1452), wherein the second window (1402) is shorter than the first window (1400), or wherein the third window (1450) is shorter than the fourth window (1452); a window constructor (2...

  10. Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA

    Science.gov (United States)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei

    2013-03-01

    With the development of international wireless communication standards, the computational requirements for baseband signal processors are increasing. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, large amounts of data are processed in parallel in computation-intensive functions, which fosters the development of single instruction multiple data (SIMD) architectures in SDR platforms. A new way must therefore be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in a machine description language, LISA. LISA is a language for instruction set architectures that enables rapid modeling at the architectural level. In order to evaluate the viability of our proposed processor, three common baseband functions, FFT, FIR digital filter and matrix multiplication, have been mapped onto the SDR platform. Analytical results showed that the SDR processor achieved a maximum performance boost of 47.1% relative to the competing processor.

  11. A* Algorithm for Graphics Processors

    OpenAIRE

    Inam, Rafia; Cederman, Daniel; Tsigas, Philippas

    2010-01-01

    Today's computer games have thousands of agents moving at the same time in areas inhabited by a large number of obstacles. In such an environment it is important to be able to calculate multiple shortest paths concurrently in an efficient manner. The highly parallel nature of the graphics processor suits this scenario perfectly. We have implemented a graphics processor based version of the A* path finding algorithm together with three algorithmic improvements that allow it to work faster and ...

  12. Communications systems and methods for subsea processors

    Science.gov (United States)

    Gutierrez, Jose; Pereira, Luis

    2016-04-26

    A subsea processor may be located near the seabed of a drilling site and used to coordinate operations of underwater drilling components. The subsea processor may be enclosed in a single interchangeable unit that fits a receptor on an underwater drilling component, such as a blow-out preventer (BOP). The subsea processor may issue commands to control the BOP and receive measurements from sensors located throughout the BOP. A shared communications bus may interconnect the subsea processor and underwater components and the subsea processor and a surface or onshore network. The shared communications bus may be operated according to a time division multiple access (TDMA) scheme.
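
    Under the TDMA scheme mentioned, each device on the shared bus transmits only during its own recurring time slot, so the subsea processor, the BOP sensors, and the surface link never collide. A minimal slot-check sketch in C; the slot length and node count are illustrative, not values from the patent:

```c
#include <stdint.h>
#include <stdbool.h>

#define SLOT_MS   10u   /* length of one TDMA slot (illustrative) */
#define NUM_NODES 8u    /* devices sharing the bus (illustrative) */

/* A node may drive the shared bus only while the current time falls
   inside its own slot; all nodes evaluate the same schedule against
   a common clock, so no arbitration traffic is needed.              */
static bool may_transmit(uint32_t now_ms, uint32_t node_id)
{
    return (now_ms / SLOT_MS) % NUM_NODES == node_id;
}
```

    The design choice here is determinism: bandwidth per node is fixed in advance, which suits periodic sensor reporting better than contention-based schemes.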

  13. An interactive parallel processor for data analysis

    International Nuclear Information System (INIS)

    Mong, J.; Logan, D.; Maples, C.; Rathbun, W.; Weaver, D.

    1984-01-01

    A parallel array of eight minicomputers has been assembled in an attempt to deal with kiloparameter data events. By exporting computer system functions to a separate processor, the authors have been able to achieve computer amplification linearly proportional to the number of executing processors.

  14. Single Event Upset Analysis: On-orbit performance of the Alpha Magnetic Spectrometer Digital Signal Processor Memory aboard the International Space Station

    Science.gov (United States)

    Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi

    2018-03-01

    Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSP), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented and the effects from the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The analysis of the impact of solar events on the DSP program memory is carried out by combining data analysis and Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in the DSP program memory, but the effect depends on the on-orbit particle density.

  15. Modcomp MAX IV System Processors reference guide

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, J.

    1990-10-01

    A user almost always faces a big problem when having to learn to use a new computer system. The information necessary to use the system is often scattered throughout many different manuals. The user also faces the problem of extracting the information really needed from each manual. Very few computer vendors supply a single Users Guide or even a manual to help the new user locate the necessary manuals. Modcomp is no exception to this; Modcomp MAX IV requires that the user be familiar with the system file usage, which adds to the problem. At General Atomics there is an ever increasing need for new users to learn how to use the Modcomp computers. This paper was written to provide a condensed "Users Reference Guide" for Modcomp computer users. This manual should be of value not only to new users but to any users that are not Modcomp computer systems experts. This "Users Reference Guide" is intended to provide the basic information for the use of the various Modcomp System Processors necessary to create, compile, link-edit, and catalog a program. Only the information necessary to provide the user with a basic understanding of the System Processors is included. This document provides enough information for the majority of programmers to use the Modcomp computers without having to refer to any other manuals. A lot of emphasis has been placed on the file description and usage for each of the System Processors. This allows the user to understand how Modcomp MAX IV does things rather than just learning the system commands.

  16. Implementation of kernels on the Maestro processor

    Science.gov (United States)

    Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.

    Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to single tile was up to 49 using 49 tiles.
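
    Kernel figures like the 5.7 GFLOPS quoted above are typically obtained by dividing a known operation count by measured wall time. A single-core C sketch for the matrix-multiplication kernel (2*N^3 floating-point operations); this is a generic per-core baseline of the kind used before scaling across the 49 tiles, not the Maestro code itself:

```c
#include <stdio.h>
#include <time.h>

#define N 512

int main(void)
{
    static float a[N][N], b[N][N], c[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = 1.0f; b[i][j] = 2.0f; c[i][j] = 0.0f;
        }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++)        /* ikj order: streams rows of b */
            for (int j = 0; j < N; j++)
                c[i][j] += a[i][k] * b[k][j];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.2f GFLOPS (c[0][0] = %g)\n", 2.0 * N * N * N / sec / 1e9, c[0][0]);
    return 0;
}
```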

  17. An Efficient Solution Method for Multibody Systems with Loops Using Multiple Processors

    Science.gov (United States)

    Ghosh, Tushar K.; Nguyen, Luong A.; Quiocho, Leslie J.

    2015-01-01

    This paper describes a multibody dynamics algorithm formulated for parallel implementation on multiprocessor computing platforms using the divide-and-conquer approach. The system of interest is a general topology of rigid and elastic articulated bodies with or without loops. The algorithm divides the multibody system into a number of smaller sets of bodies in chain or tree structures, called "branches" at convenient joints called "connection points", and uses an Order-N (O (N)) approach to formulate the dynamics of each branch in terms of the unknown spatial connection forces. The equations of motion for the branches, leaving the connection forces as unknowns, are implemented in separate processors in parallel for computational efficiency, and the equations for all the unknown connection forces are synthesized and solved in one or several processors. The performances of two implementations of this divide-and-conquer algorithm in multiple processors are compared with an existing method implemented on a single processor.

  18. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput... We show through the use of micro-benchmarks that our principles guide the design of a processor core that improves performance by an average of 38% over a similar Xilinx MicroBlaze configuration.

  19. Accuracies Of Optical Processors For Adaptive Optics

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1992-01-01

    Paper presents analysis of accuracies and requirements concerning accuracies of optical linear-algebra processors (OLAP's) in adaptive-optics imaging systems. OLAP's are much faster than digital electronic processors and eliminate some residual distortion. Question is whether errors introduced by analog processing of OLAP overcome advantage of greater speed. Paper addresses issue by presenting estimate of accuracy required in general OLAP that yields smaller average residual aberration of wave front than digital electronic processor computing at given speed.

  20. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  1. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two units of personal computer were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processor cores in total. The cluster runs in an Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem; it used a simple MPI Hello World program written in C, as sketched below. Additionally, a performance test was done to prove that the cluster's computational performance is much better than that of a single-CPU computer. In this performance test, four runs were made by executing the same code using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time shortens to half when the number of processors doubles. To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power compared to a single-CPU computer, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
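
    A communication test of the kind described can be as small as the classic MPI Hello World; each process reports its rank and host, which verifies that both nodes participate. A sketch (build with mpicc, run with e.g. mpiexec -n 8 ./hello):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(host, &len);     /* which node we landed on   */
    printf("Hello from rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}
```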

  2. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides a technical description of the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument provides capture of video images available in CCIR format. Two memory planes, each with a capacity of 512 x 512 x 8 bits, enable storage of two video image frames. The stored image can be processed on-line, and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs

  3. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2015-01-01

    Full Text Available Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system having a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, as well as the analysis of the operating principles of the new computer system, permit conclusions to be drawn regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support for the operations of discrete mathematics, which is reflected in the use of special facilities for processing sets and data structures. Within the framework of the project a special device was designed, i.e. a structure processor (SP), which improved the performance without limiting the scope of applications of such a computer system. Previous works presented the basic principles of the organization of the computational process in the MISD (Multiple Instructions, Single Data) system and showed the structure and features of the structure processor and the general principles of solving discrete optimization problems on graphs. This paper examines two search algorithms for the minimum spanning tree, namely Kruskal's and Prim's algorithms. It studies the implementations of the algorithms for two SP operation modes: a coprocessor mode and a MISD one. The paper presents results of an experimental comparison of the MISD system performance in coprocessor mode with mainframes.
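
    For reference, Kruskal's algorithm itself is short; in the MISD system above, the set operations at its core (component find/union) are the part offloaded to the structure processor. A conventional single-processor C sketch using a union-find with path halving (the fixed vertex limit is illustrative):

```c
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

#define MAXV 1024              /* illustrative vertex limit */
static int parent[MAXV];

/* Find the component representative, halving paths as we go. */
static int find(int x)
{
    while (parent[x] != x)
        x = parent[x] = parent[parent[x]];
    return x;
}

static int cmp_weight(const void *a, const void *b)
{
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

/* Kruskal's MST over m edges and n vertices; returns the MST weight. */
int kruskal(Edge *e, int m, int n)
{
    int total = 0, used = 0;
    for (int i = 0; i < n; i++)
        parent[i] = i;
    qsort(e, m, sizeof(Edge), cmp_weight);   /* edges by ascending weight */
    for (int i = 0; i < m && used < n - 1; i++) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                      /* joins two components: keep */
            parent[ru] = rv;
            total += e[i].w;
            used++;
        }
    }
    return total;
}
```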

  4. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  5. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S.; Sedukhin, S. [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I.

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which systematically constructs a family of planar array processors. These array processors are synthesized and analyzed. It is shown that some of the array processors are optimal in the framework of linear allocation of computations, in terms of the number of processing elements and computing time. (author)
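
    The elimination scheme the arrays implement can be sketched sequentially. Below is a minimal Python version of one-step fraction-free (Bareiss) elimination with an invented integer matrix, not the paper's array design: all intermediate values stay integers and every division is exact, which is what suits fixed-precision processing elements.

        def bareiss(a):
            """In-place fraction-free elimination on a square integer matrix.
            Returns the determinant (the last pivot)."""
            n = len(a)
            prev = 1                               # pivot of the previous step
            for k in range(n - 1):
                if a[k][k] == 0:
                    raise ZeroDivisionError("zero pivot; pivoting not shown")
                for i in range(k + 1, n):
                    for j in range(k + 1, n):
                        # the 2x2 minor is always divisible by the previous pivot
                        a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
                    a[i][k] = 0
                prev = a[k][k]
            return a[n - 1][n - 1]                 # equals det(a)

        m = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]
        print(bareiss(m))  # -> -1, the determinant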

  6. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    Science.gov (United States)

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332
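
    The all-to-all MCPSC pattern itself is easy to sketch. Below is a hedged Python illustration in which three trivial stand-in functions take the place of TMalign, CE and USM; only the parallel pair-per-task structure mirrors the framework described above.

        from itertools import combinations
        from multiprocessing import Pool

        def score_a(x, y): return abs(len(x) - len(y))             # stand-in
        def score_b(x, y): return sum(c != d for c, d in zip(x, y))
        def score_c(x, y): return float(x[0] == y[0])              # stand-in

        def compare_pair(pair):
            x, y = pair
            # one task = one pair scored under every criterion at once
            return (x, y, [f(x, y) for f in (score_a, score_b, score_c)])

        if __name__ == "__main__":
            domains = ["ABCD", "ABCE", "XBCD", "ABXX"]
            with Pool() as pool:           # one worker per available core
                results = pool.map(compare_pair, combinations(domains, 2))
            for x, y, scores in results:
                print(x, y, scores)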

  7. A Computational Experiment on Single-Walled Carbon Nanotubes

    Science.gov (United States)

    Simpson, Scott; Lonie, David C.; Chen, Jiechen; Zurek, Eva

    2013-01-01

    A computational experiment that investigates single-walled carbon nanotubes (SWNTs) has been developed and employed in an upper-level undergraduate physical chemistry laboratory course. Computations were carried out to determine the electronic structure, radial breathing modes, and the influence of the nanotube's diameter on the…

  8. Program computes single-point failures in critical system designs

    Science.gov (United States)

    Brown, W. R.

    1967-01-01

    Computer program analyzes the designs of critical systems and will either prove the design free of single-point failures or detect each member of the population of single-point failures inherent in the design. This program should find application in the checkout of redundant circuits and digital systems.
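
    One way to read the record: modeled as a success-path graph, a single-point failure is any component whose removal alone disconnects input from output. The Python sketch below illustrates that idea; it is not the 1967 program, and the example system is invented.

        def reachable(graph, src, dst, removed):
            stack, seen = [src], {src}
            while stack:
                node = stack.pop()
                if node == dst:
                    return True
                for nxt in graph.get(node, ()):
                    if nxt != removed and nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return False

        def single_point_failures(graph, src, dst):
            parts = set(graph) | {n for outs in graph.values() for n in outs}
            return [p for p in parts - {src, dst}
                    if not reachable(graph, src, dst, removed=p)]

        # redundant branch A/B, then a shared (non-redundant) component C
        system = {"IN": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["OUT"]}
        print(single_point_failures(system, "IN", "OUT"))  # -> ['C']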

  9. Cache Energy Optimization Techniques For Modern Processors

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL

    2013-01-01

    Modern multicore processors are employing large last-level caches; for example, Intel's E7-8800 processor uses a 24 MB L3 cache. Further, with each CMOS technology generation, leakage energy has been increasing dramatically and is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). The conventional cache energy-saving schemes either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for production systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. Also, we present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy-efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead micro-architectural components which can be easily integrated into modern processor chips. We adopt a system-wide approach to saving energy, ensuring that cache reconfiguration does not increase the energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and found that ours outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving the energy efficiency of higher-end embedded, desktop, QoS, real-time and server processors and multitasking systems. This book is intended to be a valuable guide for both

  10. CoNNeCT Baseband Processor Module

    Science.gov (United States)

    Yamamoto, Clifford K; Jedrey, Thomas C.; Gutrich, Daniel G.; Goodpasture, Richard L.

    2011-01-01

    A document describes the CoNNeCT Baseband Processor Module (BPM) based on an updated processor, memory technology, and field-programmable gate arrays (FPGAs). The BPM was developed from a requirement to provide sufficient computing power and memory storage to conduct experiments for a Software Defined Radio (SDR) to be implemented. The flight SDR uses the AT697 SPARC processor with on-chip data and instruction cache. The non-volatile memory has been increased from a 20-Mbit EEPROM (electrically erasable programmable read only memory) to a 4-Gbit Flash, managed by the RTAX2000 Housekeeper, allowing more programs and FPGA bit-files to be stored. The volatile memory has been increased from a 20-Mbit SRAM (static random access memory) to a 1.25-Gbit SDRAM (synchronous dynamic random access memory), providing additional memory space for more complex operating systems and programs to be executed on the SPARC. All memory is EDAC (error detection and correction) protected, while the SPARC processor implements fault protection via a TMR (triple modular redundancy) architecture. Further capability over prior BPM designs includes the addition of a second FPGA to implement features beyond the resources of a single FPGA. Both FPGAs are implemented with Xilinx Virtex-II parts and are interconnected by a 96-bit bus to facilitate data exchange. Dedicated 1.25-Gbit SDRAMs are wired to each Xilinx FPGA to accommodate high-rate data buffering for SDR applications as well as independent SpaceWire interfaces. The RTAX2000 manages scrubbing and configuration of each Xilinx FPGA.

  11. Towards a Process Algebra for Shared Processors

    DEFF Research Database (Denmark)

    Buchholtz, Mikael; Andersen, Jacob; Løvengreen, Hans Henrik

    2002-01-01

    We present initial work on a timed process algebra that models sharing of processor resources allowing preemption at arbitrary points in time. This enables us to model both the functional and the timely behaviour of concurrent processes executed on a single processor. We give a refinement relation...

  12. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    Explicitly Parallel Instruction Computing (EPIC) is an instruction processing paradigm that has been in the spotlight due to its adoption by the next generation of Intel processors, starting with the IA-64. The EPIC processing paradigm is an evolution of the Very Long Instruction Word (VLIW) paradigm. This article gives an ...

  13. Development of an Advanced Digital Reactor Protection System Using Diverse Dual Processors to Prevent Common-Mode Failure

    International Nuclear Information System (INIS)

    Shin, Hyun Kook; Nam, Sang Ku; Sohn, Se Do; Chang, Hoon Seon

    2003-01-01

    The advanced digital reactor protection system (ADRPS) with diverse dual processors has been developed to prevent common-mode failure (CMF). The principle of diversity is applied to both the hardware design and the software design. For hardware diversity, two different types of CPUs are used for the bistable processor and the local coincidence logic (LCL) processor. Versa Module Eurocard-based single board computers are used for the CPU hardware platforms. The QNX operating system and the VxWorks operating system were selected for software diversity. Functional diversity is also applied to the input and output modules, and to the algorithms in the bistable and LCL processors. The characteristics of the newly developed digital protection system are described together with its preventive capability against CMF. System reliability analysis is also discussed. The evaluation results show that the ADRPS has a good preventive capability against CMF and is a highly reliable reactor protection system

  14. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelism

  15. Diagnosis of dementia with single photon emission computed tomography

    International Nuclear Information System (INIS)

    Jagust, W.J.; Budinger, T.F.; Reed, B.R.

    1987-01-01

    Single photon emission computed tomography is a practical modality for the study of physiologic cerebral activity in vivo. We utilized single photon emission computed tomography and N-isopropyl-p-iodoamphetamine iodine 123 to evaluate regional cerebral blood flow in nine patients with Alzheimer's disease (AD), five healthy elderly control subjects, and two patients with multi-infarct dementia. We found that all subjects with AD demonstrated flow deficits in temporoparietal cortex bilaterally, and that the ratio of activity in bilateral temporoparietal cortex to activity in the whole slice allowed the differentiation of all patients with AD from both the controls and from the patients with multi-infarct dementia. Furthermore, this ratio showed a strong correlation with disease severity in the AD group. Single photon emission computed tomography appears to be useful in the differential diagnosis of dementia and reflects clinical features of the disease

  16. Libera Electron Beam Position Processor

    CERN Document Server

    Ursic, Rok

    2005-01-01

    Libera is a product family delivering unprecedented possibilities for either building powerful single-station solutions or architecting complex feedback systems in the field of accelerator instrumentation and controls. This paper presents the functionality and field performance of its first member, the electron beam position processor. It offers superior performance, with multiple measurement channels simultaneously delivering position measurements in digital format with MHz, kHz and Hz bandwidths. This all-in-one product, facilitating pulsed and CW measurements, is much more than simply a high-performance beam position measuring device delivering micrometer-level reproducibility with sub-micrometer resolution. Rich connectivity options and innate processing power make it a powerful feedback building block. By interconnecting multiple Libera electron beam position processors one can build a low-latency, high-throughput orbit feedback system without adding additional hardware. Libera electron beam position processor ...

  17. Dynamic Reconfiguration for Adaptive Computing Systems (DRACS)

    National Research Council Canada - National Science Library

    Zaino, John

    2002-01-01

    ...) driven and one additional host-driven demonstration. This report describes how these can be used to develop a "virtual co-processor" that supports multiple reconfigurable computing applications residing in a single piece of hardware...

  18. A single-chip computer analysis system for liquid fluorescence

    International Nuclear Information System (INIS)

    Zhang Yongming; Wu Ruisheng; Li Bin

    1998-01-01

    The single-chip computer analysis system for liquid fluorescence is an intelligent analytical instrument based on the principle that liquids containing hydrocarbons give out several characteristic fluorescences when irradiated by strong light. Besides a single-chip computer, the system makes use of the keyboard and the calculation and printing functions of a CASIO printing calculator. It combines optics, mechanics and electronics in one, and is small, light and practical, so it can be used for surface water sample analysis in oil fields and for impurity analysis of other materials

  19. OSL sensitivity changes during single aliquot procedures: Computer simulations

    DEFF Research Database (Denmark)

    McKeever, S.W.S.; Agersnap Larsen, N.; Bøtter-Jensen, L.

    1997-01-01

    We present computer simulations of sensitivity changes obtained during single aliquot, regeneration procedures. The simulations indicate that the sensitivity changes are the combined result of shallow trap and deep trap effects. Four separate processes have been identified. Although procedures can...... dose used and the natural dose. However, the sensitivity changes appear only weakly dependent upon added dose, suggesting that the SARA single aliquot technique may be a suitable method to overcome the sensitivity changes. (C) 1997 Elsevier Science Ltd....

  20. Numeric algorithms for parallel processors computer architectures with applications to the few-groups neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.

    1987-01-01

    A numeric algorithm and an associated computer code were developed for the rapid solution of the finite-difference method representation of the few-group neutron-diffusion equations on parallel computers. Applications of the numeric algorithm on both SIMD (vector pipeline) and MIMD/SIMD (multi-CPU/vector pipeline) architectures were explored. The algorithm was successfully implemented in the two-group, 3-D neutron diffusion computer code named DIFPAR3D (DIFfusion PARallel 3-Dimension). Numerical-solution techniques used in the code include the Chebyshev polynomial acceleration technique in conjunction with the power method of outer iteration. For inner iterations, a parallel form of red-black (cyclic) line SOR with automated determination of group dependent relaxation factors and iteration numbers required to achieve specified inner iteration error tolerance is incorporated. The code employs a macroscopic depletion model with trace capability for selected fission products' transients and critical boron. In addition to this, moderator and fuel temperature feedback models are also incorporated into the DIFPAR3D code, for realistic simulation of power reactor cores. The physics models used were proven acceptable in separate benchmarking studies
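
    A minimal illustration of the red-black ordering that exposes the parallelism used in the inner iterations: the Python sketch below applies point red-black SOR to a 1-D Poisson problem. The actual code uses line SOR on the few-group diffusion equations; the mesh, relaxation factor and source here are invented.

        def red_black_sor(f, h, omega=1.5, tol=1e-10, max_iter=10_000):
            """Solve -u'' = f on (0,1) with u(0)=u(1)=0, mesh width h."""
            n = len(f)
            u = [0.0] * (n + 2)                 # includes boundary zeros
            for _ in range(max_iter):
                diff = 0.0
                for start in (1, 2):            # red sweep, then black sweep
                    # every point in a sweep depends only on the other colour,
                    # so each half-sweep could run fully in parallel
                    for i in range(start, n + 1, 2):
                        gs = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i - 1])
                        new = u[i] + omega * (gs - u[i])
                        diff = max(diff, abs(new - u[i]))
                        u[i] = new
                if diff < tol:
                    break
            return u[1:-1]

        n = 9
        h = 1.0 / (n + 1)
        print(red_black_sor([1.0] * n, h)[:3])  # approx [0.045, 0.08, 0.105]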

  1. Shuttle orbit IMU alignment. Single-precision computation error

    Science.gov (United States)

    Mcclain, C. R.

    1980-01-01

    The source of computational error in the inertial measurement unit (IMU) on-orbit alignment software was investigated. Simulation runs were made on the IBM 360/70 computer with the IMU orbit alignment software coded in HAL/S. The results indicate that for small IMU misalignment angles (less than 600 arc seconds), single-precision computations in combination with the arc cosine method of eigenrotation-angle extraction introduce an additional misalignment error of up to 230 arc seconds per axis. Use of the arc sine method, however, produced negligible misalignment error. As a result of this study, the arc sine method was recommended for use in the IMU on-orbit alignment software.
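
    The effect is easy to reproduce: for a small angle t, cos(t) is approximately 1 - t^2/2, which sits so close to 1 that single precision cannot resolve it, while sin(t) is approximately t and remains well scaled. A hedged numeric illustration in Python (not the flight software; the angles are illustrative):

        import math
        import numpy as np

        ARCSEC = math.pi / (180 * 3600)          # radians per arc second

        for t_as in (600.0, 100.0, 20.0):        # small misalignment angles
            t = t_as * ARCSEC
            c = np.float32(math.cos(t))          # rounded to single precision
            s = np.float32(math.sin(t))
            err_acos = abs(math.acos(float(c)) - t) / ARCSEC
            err_asin = abs(math.asin(float(s)) - t) / ARCSEC
            print(f"{t_as:6.0f} arcsec: acos error {err_acos:7.3f}, "
                  f"asin error {err_asin:7.3f} (arcsec)")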

  2. Computational Modeling of Photonic Crystal Microcavity Single-Photon Emitters

    Science.gov (United States)

    Saulnier, Nicole A.

    Conventional cryptography is based on algorithms that are mathematically complex and difficult to solve, such as factoring large numbers. The advent of a quantum computer would render these schemes useless. As scientists work to develop a quantum computer, cryptographers are developing new schemes for unconditionally secure cryptography. Quantum key distribution has emerged as one of the potential replacements of classical cryptography. It relies on the fact that measurement of a quantum bit changes the state of the bit and undetected eavesdropping is impossible. Single polarized photons can be used as the quantum bits, such that a quantum system would in some ways mirror the classical communication scheme. The quantum key distribution system would include components that create, transmit and detect single polarized photons. The focus of this work is on the development of an efficient single-photon source. This source is comprised of a single quantum dot inside of a photonic crystal microcavity. To better understand the physics behind the device, a computational model is developed. The model uses Finite-Difference Time-Domain methods to analyze the electromagnetic field distribution in photonic crystal microcavities. It uses an 8-band k · p perturbation theory to compute the energy band structure of the epitaxially grown quantum dots. We discuss a method that combines the results of these two calculations for determining the spontaneous emission lifetime of a quantum dot in bulk material or in a microcavity. The computational models developed in this thesis are used to identify and characterize microcavities for potential use in a single-photon source. The computational tools developed are also used to investigate novel photonic crystal microcavities that incorporate 1D distributed Bragg reflectors for vertical confinement. It is found that the spontaneous emission enhancement in the quasi-3D cavities can be significantly greater than in traditional suspended slab

  3. Satellite on-board real-time SAR processor prototype

    Science.gov (United States)

    Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François

    2017-11-01

    A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates this is a clear disadvantage. SAR images are typically processed electronically applying dedicated Fourier transformations. This, however, can also be performed optically in real-time. Originally the first SAR images were optically processed. The optical Fourier processor architecture provides inherent parallel computing capabilities allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel yielding real-time performances, i.e. without resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and
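
    The core operation the optical processor performs in analog, Fourier-domain matched filtering of complex raw data, can be illustrated digitally. A hedged 1-D range-compression sketch in Python with an invented full-band chirp and two invented targets; the 2-D azimuth/range processing described above follows the same pattern:

        import numpy as np

        n = 1024
        t = np.arange(n)
        ref = np.exp(1j * np.pi * (t - n / 2) ** 2 / n)   # full-band chirp

        np.random.seed(0)
        echo = 0.5 * np.roll(ref, 137) + 0.8 * np.roll(ref, 505)  # two targets
        echo += 0.05 * (np.random.randn(n) + 1j * np.random.randn(n))

        # matched filter: multiply spectra, inverse transform
        image = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(ref)))
        peaks = np.argsort(np.abs(image))[-2:]
        print(sorted(map(int, peaks)))                    # -> [137, 505]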

  4. DEBROS: design and use of a Linux-like RTOS on an inexpensive 8-bit single board computer

    International Nuclear Information System (INIS)

    Davis, M.A.

    2012-01-01

    As the power, complexity, and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive Single Board Computers (SBCs) based on 8-bit processors. When the proprietary, non-standard tools from the vendor for one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards-based alternative. Inspired by the early work done on operating systems such as UNIX(TM), Linux, and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System), which is a fully preemptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/UNIX-compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using such inexpensive hardware. (author)

  5. Computing magnetic anisotropy constants of single molecule magnets

    Indian Academy of Sciences (India)

    Administrator

    We present here a theoretical approach to compute the molecular magnetic anisotropy parameters, DM and EM, for single molecule magnets in any given spin eigenstate of the exchange spin Hamiltonian. We first describe a hybrid constant MS-valence bond (VB) technique of solving spin Hamiltonians employing ...

  6. Computing magnetic anisotropy constants of single molecule magnets

    Indian Academy of Sciences (India)

    We present here a theoretical approach to compute the molecular magnetic anisotropy parameters, DM and EM, for single molecule magnets in any given spin eigenstate of the exchange spin Hamiltonian. We first describe a hybrid constant MS-valence bond (VB) technique of solving spin Hamiltonians employing full spatial ...

  7. On the implementation of the Ford-Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Full Text Available Algorithms for optimization in networks and directed graphs find broad application in solving practical tasks. However, with the large-scale introduction of information technologies into human activity, requirements on input data volumes and solution retrieval rates keep growing. Although a large number of algorithms for various models of computers and computing systems have by now been studied and implemented, solving key optimization problems at realistic problem sizes remains difficult. In this regard the search for new and more efficient computing structures, as well as updates of known algorithms, are of great current interest. The work considers an implementation of the maximum-flow search algorithm on a directed graph for the Multiple Instruction and Single Data (MISD) computer system developed at BMSTU. A key feature of this architecture is deep hardware support for operations over sets and data structures. Storage of and access to them are realized in a specialized structure processor (SP), which is capable of performing at the hardware level such operations as add, delete, search, intersect, complement, and merge. An advantage of such a system is the possibility of executing the parts of computing tasks concerning access to sets and data structures in parallel with arithmetic and logical processing of information. Previous works presented the general principles of organizing the computing process and the features of programs implemented in the MISD system, described the structure and principles of functioning of the structure processor, showed the general principles of solving graph tasks in such a system, and experimentally studied the efficiency of the resulting algorithms. This work gives the command formats of the SP, offers a technique for updating the algorithms implemented in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm

  8. Preservice Teachers' Computer Use in Single Computer Training Courses; Relationships and Predictions

    Science.gov (United States)

    Zogheib, Salah

    2015-01-01

    Single computer courses offered at colleges of education are expected to provide preservice teachers with the skills and expertise needed to adopt computer technology in their future classrooms. However, preservice teachers still find difficulty adopting such technology. This research paper investigated relationships among preservice teachers'…

  9. Performing stencil computations

    Energy Technology Data Exchange (ETDEWEB)

    Donofrio, David

    2018-01-16

    A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
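
    For context, this is what a stencil computation looks like in plain software; the claimed hardware turns each neighbour access (an offset) into a single-instruction load. The 3-point smoothing kernel below is an invented example, not the patent's.

        def stencil_1d(src, weights=(0.25, 0.5, 0.25)):
            """Each output point is a fixed-weight combination of a
            neighbourhood of input points; boundaries copied unchanged."""
            out = list(src)
            for i in range(1, len(src) - 1):
                out[i] = (weights[0] * src[i - 1] +
                          weights[1] * src[i] +
                          weights[2] * src[i + 1])
            return out

        data = [0.0, 0.0, 4.0, 0.0, 0.0]
        print(stencil_1d(data))   # -> [0.0, 1.0, 2.0, 1.0, 0.0]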

  10. Linear optical quantum computing in a single spatial mode.

    Science.gov (United States)

    Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A

    2013-10-11

    We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.

  11. Parallel Computation on Multicore Processors Using Explicit Form of the Finite Element Method and C++ Standard Libraries

    Directory of Open Access Journals (Sweden)

    Rek Václav

    2016-11-01

    Full Text Available In this paper, a form of modification of existing sequential code, written in the C or C++ programming language, for calculating various kinds of structures using the explicit form of the Finite Element Method (Dynamic Relaxation Method, Explicit Dynamics) in the NEXX system is introduced. The NEXX system is the core of the engineering software NEXIS, Scia Engineer, RFEM and RENEX. It supports multithreaded execution, which can now be expressed in native C++ using the standard libraries. Thanks to the high degree of abstraction that contemporary C++ provides, a library created in this way can be readily generalized to other uses of parallelism in computational mechanics.

  12. Invasive tightly coupled processor arrays

    CERN Document Server

    LARI, VAHID

    2016-01-01

    This book introduces new massively parallel computer (MPSoC) architectures called invasive tightly coupled processor arrays (TCPAs). It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees on non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture, consisting of locally interconnected VLIW processing elements, can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding the global memory accesses of GPUs, and may even support loops with complex dependencies, such as loop-carried dependencies, that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desire...

  13. Enabling Future Robotic Missions with Multicore Processors

    Science.gov (United States)

    Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.

    2011-01-01

    Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.

  14. Discussion paper for a highly parallel array processor-based machine

    International Nuclear Information System (INIS)

    Hagstrom, R.; Bolotin, G.; Dawson, J.

    1984-01-01

    The architectural plan for a quickly realizable implementation of a highly parallel special-purpose computer system with peak performance in the range of 6 billion floating point operations per second is discussed. The architecture is suited to lattice gauge theory computations of fundamental physics interest and may be applicable to a range of other numerically intensive computational problems. The plan is quickly realizable because it employs a maximum of commercially available hardware subsystems and because the architecture is software-transparent to the individual processors, allowing straightforward reuse of whatever commercially available operating systems and support software are suitable to run on the commercially produced processors. A tiny prototype instrument, designed along this architecture, has already been operated. A few elementary examples of programs which can run efficiently are presented. The large machine which the authors propose to build would be based upon a highly capable array processor, the ST-100 Array Processor, and specific design possibilities are discussed. The first step toward realizing this plan practically is to install a single ST-100 to allow algorithm development to proceed while a demonstration unit is built using two of the ST-100 Array Processors

  15. Multithreading in vector processors

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi

    2018-01-16

    In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.
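
    A toy software model of the claimed mechanism, for illustration only: a register file holding one program counter per thread, with one thread issuing per cycle in round-robin order. The instruction streams below are invented.

        programs = {
            0: ["A0", "A1", "A2"],           # thread 0's instruction stream
            1: ["B0", "B1"],
            2: ["C0", "C1", "C2", "C3"],
        }
        pc = {tid: 0 for tid in programs}    # the vectorized program counters

        cycle, live = 0, set(programs)
        while live:
            tid = cycle % len(programs)      # round-robin thread selection
            if tid in live:
                instr = programs[tid][pc[tid]]
                print(f"cycle {cycle}: thread {tid} issues {instr}")
                pc[tid] += 1                 # advance only this thread's PC
                if pc[tid] == len(programs[tid]):
                    live.remove(tid)
            cycle += 1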

  16. Computational models of the single substitutional nitrogen atom in diamond

    CERN Document Server

    Lombardi, E B; Osuch, K; Reynhardt, E C

    2003-01-01

    The single substitutional nitrogen atom in diamond is apparently a very simple defect in a very simple elemental solid. Yet it has been modelled by a range of computational models, few of which agree either with each other or with the experimental data on the defect. If computational models of less well understood defects in this and more complex materials are to be reliable, we should understand why the discrepancies arise and how they can be avoided in future modelling. This paper presents an all-electron, augmented plane-wave (APW) density functional theory (DFT) calculation using the modern APW-plus-local-orbitals, full-potential periodic approximation. This is compared to DFT finite-cluster pseudopotential calculations and a semi-empirical Hartree-Fock model. Comparisons between the results of these and previous models allow us to discuss the reliability of computational methods for this and similar defects.

  17. Probabilistic implementation of universal quantum processors

    International Nuclear Information System (INIS)

    Hillery, Mark; Buzek, Vladimir; Ziman, Mario

    2002-01-01

    We present a probabilistic quantum processor for qudits on a single qudit of dimension N. The processor itself is represented by a fixed array of gates. The input of the processor consists of two registers. In the program register the set of instructions (program) is encoded. This program is applied to the data register. The processor can perform any operation on a single qudit of dimension N with a certain probability. For a general unitary operation, the probability is 1/N^2, but for more restricted sets of operators the probability can be higher. In fact, this probability can be independent of the dimension of the Hilbert space of the qudit under some conditions

  18. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    OpenAIRE

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purpose to the object tracking tasks using only limited amount of tra...

  19. An overview of flight computer technologies for future NASA

    Science.gov (United States)

    Alkalai, L.

    2001-01-01

    In this paper, we present an overview of current developments by several US Government Agencies and associated programs, towards high-performance single-board computers for use in space. Three separate projects will be described: two based on the PowerPC processor, and one based on the Pentium processor.

  20. Distributed processor systems

    International Nuclear Information System (INIS)

    Zacharov, B.

    1976-01-01

    In recent years, there has been a growing tendency in high-energy physics and in other fields to solve computational problems by distributing tasks among the resources of inter-coupled processing devices and associated system elements. This trend has gained further momentum more recently with the increased availability of low-cost processors and with the development of the means of data distribution. In two lectures, the broad question of distributed computing systems is examined and the historical development of such systems reviewed. An attempt is made to examine the reasons for the existence of these systems and to discern the main trends for the future. The components of distributed systems are discussed in some detail and particular emphasis is placed on the importance of standards and conventions in certain key system components. The ideas and principles of distributed systems are discussed in general terms, but these are illustrated by a number of concrete examples drawn from the context of the high-energy physics environment. (Auth.)

  1. Computing with a single qubit faster than the computation quantum speed limit

    Science.gov (United States)

    Sinitsyn, Nikolai A.

    2018-02-01

    The possibility to save and process information in fundamentally indistinguishable states is the quantum mechanical resource that is not encountered in classical computing. I demonstrate that, if energy constraints are imposed, this resource can be used to accelerate information-processing without relying on entanglement or any other type of quantum correlations. In fact, there are computational problems that can be solved much faster, in comparison to currently used classical schemes, by saving intermediate information in nonorthogonal states of just a single qubit. There are also error correction strategies that protect such computations.

  2. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Science.gov (United States)

    2010-04-01

    ... 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  3. Slime mould processors, logic gates and sensors.

    Science.gov (United States)

    Adamatzky, A

    2015-07-28

    A heterotic, or hybrid, computation implies that two or more substrates of different physical nature are merged into a single device with indistinguishable parts. These hybrid devices then undertake coherent acts of programmable and sensible processing of information. We study the potential of heterotic computers using slime mould acting under the guidance of chemical, mechanical and optical stimuli. The plasmodium of the acellular slime mould Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioural and morphological patterns in response to changing environmental conditions. Given data represented by chemical or physical stimuli, we can employ and modify the behaviour of the slime mould to make it solve a range of computing and sensing tasks. We overview results of laboratory experimental studies on prototyping slime mould morphological processors for approximation of Voronoi diagrams and planar shapes and for solving mazes, and discuss logic gates implemented via collisions of active growing zones and tactile responses of P. polycephalum. We also overview a range of electronic components (memristor; chemical, tactile and colour sensors) made of the slime mould. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  4. 3081/E processor and its on-line use

    International Nuclear Information System (INIS)

    Rankin, P.; Bricaud, B.; Gravina, M.

    1985-05-01

    The 3081/E is a second-generation emulator of an IBM mainframe. One of its applications will be to form part of the data acquisition system of the upgraded Mark II detector for data taking at the SLAC linear collider. Since the processor does not have direct connections to I/O devices, a FASTBUS interface will be provided to allow communication with both SLAC Scanner Processors (which are responsible for the accumulation of data at a crate level) and the experiment's VAX 8600 mainframe. The 3081/E's will supply a significant amount of on-line computing power to the experiment (a single 3081/E is equivalent to 4 to 5 VAX 11/780's). A major advantage of the 3081/E is that program development can be done on an IBM mainframe (such as the one used for off-line analysis), which gives the programmer access to a full range of debugging tools. The processor's performance can be continually monitored by comparing the results it obtains with those given when the same program is run on an IBM computer. 9 refs

  5. Spiking neural circuits with dendritic stimulus processors : encoding, decoding, and identification in reproducing kernel Hilbert spaces.

    Science.gov (United States)

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2015-02-01

    We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.

  6. Special processor for in-core control systems

    International Nuclear Information System (INIS)

    Golovanov, M.N.; Duma, V.R.; Levin, G.L.; Mel'nikov, A.V.; Polikanin, A.V.; Filatov, V.P.

    1978-01-01

    The BUTs-20 special processor is discussed. It is designed to control the units of the in-core control equipment that are incorporated into the VECTOR communication channel, and to provide preliminary data processing prior to computer calculations. A set of instructions and a flowsheet of the processor, and the organization of its communication with memories and other units of the system, are given. The processor components, a control unit and an arithmetic logical unit, are discussed. It is noted that the special processor permits more effective utilization of the computer time

  7. Integrated fuel processor development

    International Nuclear Information System (INIS)

    Ahmed, S.; Pereira, C.; Lee, S. H. D.; Krumpelt, M.

    2001-01-01

    The Department of Energy's Office of Advanced Automotive Technologies has been supporting the development of fuel-flexible fuel processors at Argonne National Laboratory. These fuel processors will enable fuel cell vehicles to operate on fuels available through the existing infrastructure. The constraints of on-board space and weight require that these fuel processors be designed to be compact and lightweight, while meeting the performance targets for efficiency and gas quality needed for the fuel cell. This paper discusses the performance of a prototype fuel processor that has been designed and fabricated to operate with liquid fuels, such as gasoline, ethanol, methanol, etc. Rated for a capacity of 10 kWe (one-fifth of that needed for a car), the prototype fuel processor integrates the unit operations (vaporization, heat exchange, etc.) and processes (reforming, water-gas shift, preferential oxidation reactions, etc.) necessary to produce the hydrogen-rich gas (reformate) that will fuel the polymer electrolyte fuel cell stacks. The fuel processor work is being complemented by analytical and fundamental research. With the ultimate objective of meeting on-board fuel processor goals, these studies include: modeling fuel cell systems to identify design and operating features; evaluating alternative fuel processing options; and developing appropriate catalysts and materials. Issues and outstanding challenges that need to be overcome in order to develop practical, on-board devices are discussed

  8. Heterogeneous Multicore Processor Technologies for Embedded Systems

    CERN Document Server

    Uchiyama, Kunio; Kasahara, Hironori; Nojiri, Tohru; Noda, Hideyuki; Tawara, Yasuhiro; Idehara, Akio; Iwata, Kenichi; Shikano, Hiroaki

    2012-01-01

    To satisfy the higher requirements of digitally converged embedded systems, this book describes heterogeneous multicore technology that uses various kinds of low-power embedded processor cores on a single chip. With this technology, heterogeneous parallelism can be implemented on an SoC, and greater flexibility and superior performance per watt can then be achieved. This book defines the heterogeneous multicore architecture and explains in detail several embedded processor cores including CPU cores and special-purpose processor cores that achieve highly arithmetic-level parallelism. The authors developed three multicore chips (called RP-1, RP-2, and RP-X) according to the defined architecture with the introduced processor cores. The chip implementations, software environments, and applications running on the chips are also explained in the book. Provides readers an overview and practical discussion of heterogeneous multicore technologies from both a hardware and software point of view; Discusses a new, high-p...

  9. Computing tools for implementing standards for single-case designs.

    Science.gov (United States)

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  10. Assembly processor program converts symbolic programming language to machine language

    Science.gov (United States)

    Pelto, E. V.

    1967-01-01

    Assembly processor program converts symbolic programming language to machine language. This program translates symbolic codes into computer-understandable instructions, assigns locations in storage for successive instructions, and computes storage locations from symbolic addresses.
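
    The classic structure of such a program is a two-pass assembler: pass 1 assigns storage locations and records label addresses, pass 2 translates mnemonics and substitutes numeric addresses for symbolic ones. A hedged Python sketch with an invented opcode table and syntax, not the original program:

        OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "JUMP": 0x4, "HALT": 0x5}

        def assemble(source, origin=0):
            symbols, lines = {}, []
            loc = origin
            for raw in source:                    # pass 1: locations + labels
                label, _, stmt = raw.rpartition(":")
                stmt = stmt.strip()
                if label:
                    symbols[label.strip()] = loc
                if stmt:
                    lines.append((loc, stmt))
                    loc += 1                      # one word per instruction
            words = []
            for loc, stmt in lines:               # pass 2: translate
                op, *arg = stmt.split()
                operand = symbols.get(arg[0], None) if arg else 0
                if arg and operand is None:
                    operand = int(arg[0])         # literal numeric operand
                words.append((loc, (OPCODES[op] << 8) | operand))
            return words

        prog = ["start: LOAD 7", "ADD 1", "STORE 9", "JUMP start", "HALT"]
        for loc, word in assemble(prog):
            print(f"{loc:03d}: {word:04x}")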

  11. Single Cell Adhesion Assay Using Computer Controlled Micropipette

    Science.gov (United States)

    Salánki, Rita; Hős, Csaba; Orgovan, Norbert; Péter, Beatrix; Sándor, Noémi; Bajtay, Zsuzsa; Erdei, Anna; Horvath, Robert; Szabó, Bálint

    2014-01-01

    Cell adhesion is a fundamental phenomenon vital for all multicellular organisms. Recognition of and adhesion to specific macromolecules is a crucial task of leukocytes to initiate the immune response. To gain statistically reliable information of cell adhesion, large numbers of cells should be measured. However, direct measurement of the adhesion force of single cells is still challenging and today’s techniques typically have an extremely low throughput (5–10 cells per day). Here, we introduce a computer controlled micropipette mounted onto a normal inverted microscope for probing single cell interactions with specific macromolecules. We calculated the estimated hydrodynamic lifting force acting on target cells by the numerical simulation of the flow at the micropipette tip. The adhesion force of surface attached cells could be accurately probed by repeating the pick-up process with increasing vacuum applied in the pipette positioned above the cell under investigation. Using the introduced methodology hundreds of cells adhered to specific macromolecules were measured one by one in a relatively short period of time (∼30 min). We blocked nonspecific cell adhesion by the protein non-adhesive PLL-g-PEG polymer. We found that human primary monocytes are less adherent to fibrinogen than their in vitro differentiated descendants: macrophages and dendritic cells, the latter producing the highest average adhesion force. Validation of the here introduced method was achieved by the hydrostatic step-pressure micropipette manipulation technique. Additionally the result was reinforced in standard microfluidic shear stress channels. Nevertheless, automated micropipette gave higher sensitivity and less side-effect than the shear stress channel. Using our technique, the probed single cells can be easily picked up and further investigated by other techniques; a definite advantage of the computer controlled micropipette. Our experiments revealed the existence of a sub

  12. Single cell adhesion assay using computer controlled micropipette.

    Directory of Open Access Journals (Sweden)

    Rita Salánki

    Full Text Available Cell adhesion is a fundamental phenomenon vital for all multicellular organisms. Recognition of and adhesion to specific macromolecules is a crucial task of leukocytes to initiate the immune response. To gain statistically reliable information of cell adhesion, large numbers of cells should be measured. However, direct measurement of the adhesion force of single cells is still challenging and today's techniques typically have an extremely low throughput (5-10 cells per day. Here, we introduce a computer controlled micropipette mounted onto a normal inverted microscope for probing single cell interactions with specific macromolecules. We calculated the estimated hydrodynamic lifting force acting on target cells by the numerical simulation of the flow at the micropipette tip. The adhesion force of surface attached cells could be accurately probed by repeating the pick-up process with increasing vacuum applied in the pipette positioned above the cell under investigation. Using the introduced methodology hundreds of cells adhered to specific macromolecules were measured one by one in a relatively short period of time (∼30 min. We blocked nonspecific cell adhesion by the protein non-adhesive PLL-g-PEG polymer. We found that human primary monocytes are less adherent to fibrinogen than their in vitro differentiated descendants: macrophages and dendritic cells, the latter producing the highest average adhesion force. Validation of the here introduced method was achieved by the hydrostatic step-pressure micropipette manipulation technique. Additionally the result was reinforced in standard microfluidic shear stress channels. Nevertheless, automated micropipette gave higher sensitivity and less side-effect than the shear stress channel. Using our technique, the probed single cells can be easily picked up and further investigated by other techniques; a definite advantage of the computer controlled micropipette. Our experiments revealed the existence of a

  13. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    Science.gov (United States)

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purpose to the object tracking tasks using only limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  14. Graphics Processor Units (GPUs)

    Science.gov (United States)

    Wyrwas, Edward J.

    2017-01-01

    This presentation will include information about Graphics Processor Unit (GPU) technology, NASA Electronic Parts and Packaging (NEPP) tasks, the test setup, test parameter considerations, lessons learned, collaborations, a roadmap, NEPP partners, results to date, and future plans.

  15. Logistic Fuel Processor Development

    National Research Council Canada - National Science Library

    Salavani, Reza

    2004-01-01

    The Air Base Technologies Division of the Air Force Research Laboratory has developed a logistic fuel processor that removes the sulfur content of the fuel and in the process converts logistic fuel...

  16. Adaptive signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed.
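
    The quantized (clipped) update is what removes the per-weight multiplier: each weight moves a fixed step in the direction sign(e)*sign(x). A hedged software model in Python follows; the filter length, step size and test system are invented, and this only models the update rule, not the reported hardware.

        import random

        def sign(v): return 1.0 if v >= 0 else -1.0

        def clipped_lms(x, d, n_taps=4, mu=0.005):
            """Adapt an FIR filter so its output tracks the desired signal d."""
            w = [0.0] * n_taps
            for k in range(n_taps - 1, len(x)):
                window = x[k - n_taps + 1:k + 1][::-1]     # most recent first
                y = sum(wi * xi for wi, xi in zip(w, window))
                e = d[k] - y                               # system error signal
                # clipped update: fixed step, direction from signs only
                w = [wi + mu * sign(e) * sign(xi) for wi, xi in zip(w, window)]
            return w

        random.seed(1)
        x = [random.uniform(-1, 1) for _ in range(5000)]
        target = [0.5, -0.3, 0.2, 0.1]                     # unknown FIR system
        d = [sum(t * x[k - i] for i, t in enumerate(target)) if k >= 3 else 0.0
             for k in range(len(x))]
        print([round(wi, 2) for wi in clipped_lms(x, d)])  # approaches target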

  17. Adaptive signal processor

    International Nuclear Information System (INIS)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed

  18. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1989-10-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  19. Single seed precise sowing of maize using computer simulation.

    Science.gov (United States)

    Zhao, Longgang; Han, Zhongzhi; Yang, Jinzhong; Qi, Hua

    2018-01-01

    In order to test the feasibility of computer simulation in field maize planting, the selection of the method of single-seed precise sowing in maize is studied based on the quadratic function model Y = A×(D − Dm)² + Ym, which describes the relationship between maize yield and planting density. The advantages and disadvantages of the two planting methods under single-seed sowing are also compared: Method 1 plants at the optimum density, while Method 2 plants the ideal seedling emergence number. It is found that the yield reduction rate and yield fluctuation of Method 2 are both lower than those of Method 1. The yield of Method 2 increased by at least 0.043 t/hm², and its advantage over Method 1 grows at higher yield levels. A further study of the influence of seedling emergence rate on maize yield finds that the yields of the two methods are both highly positively correlated with the seedling emergence rate, and that the standard deviations of their yields are both highly negatively correlated with it. To study the broken, sparse stands caused by single-seed precise sowing, a definition of seedling missing spots is put forward. The study found that the number of hundred-dot spots is related to the field seedling emergence rate by the parabolic function y = −189.32x² + 309.55x − 118.95, and that the number of spot missing seedlings is related to the field seedling emergence rate by the negative exponential function y = 395.69e^(−6.144x). The results may help to guide maize seed production and single-seed precise sowing to some extent.
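
    The fitted relationships quoted above translate directly into code. The Python sketch below reproduces the three functions; the values of A, Dm, and Ym in the demo are illustrative placeholders, since the abstract does not report them.

        import numpy as np

        def maize_yield(D, A, Dm, Ym):
            # Quadratic density-yield model Y = A*(D - Dm)**2 + Ym, with
            # A < 0 so yield peaks at the optimum density Dm.
            return A * (D - Dm)**2 + Ym

        def hundred_dot_spots(x):
            # Parabolic fit: hundred-dot seedling-missing spots versus
            # field seedling emergence rate x (0..1).
            return -189.32 * x**2 + 309.55 * x - 118.95

        def spot_missing_seedlings(x):
            # Negative-exponential fit: spot missing seedlings versus x.
            return 395.69 * np.exp(-6.144 * x)

        # Illustrative parameter values only:
        print(maize_yield(D=7.5, A=-0.05, Dm=8.0, Ym=10.2))
        print(hundred_dot_spots(0.9), spot_missing_seedlings(0.9))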

  20. Functional unit for a processor

    NARCIS (Netherlands)

    Rohani, A.; Kerkhoff, Hans G.

    2013-01-01

    The invention relates to a functional unit for a processor, such as a Very Large Instruction Word Processor. The invention further relates to a processor comprising at least one such functional unit. The invention further relates to a functional unit and processor capable of mitigating the effect of

  1. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  2. Single photon emission computed tomography in AIDS dementia complex

    International Nuclear Information System (INIS)

    Pohl, P.; Vogl, G.; Fill, H.; Roessler, H.Z.; Zangerle, R.; Gerstenbrand, F.

    1988-01-01

    Single photon emission computed tomography (SPECT) studies were performed in AIDS dementia complex using IMP in 12 patients (and HM-PAO in four of these same patients). In all patients, SPECT revealed either multiple or focal uptake defects, the latter corresponding with focal signs or symptoms in all but one case. Computerized tomography showed a diffuse cerebral atrophy in eight of 12 patients, magnetic resonance imaging exhibited changes like atrophy and/or leukoencephalopathy in two of five cases. Our data indicate that both disturbance of cerebral amine metabolism and alteration of local perfusion share in the pathogenesis of AIDS dementia complex. SPECT is an important aid in the diagnosis of AIDS dementia complex and contributes to the understanding of the pathophysiological mechanisms of this disorder

  3. Spectrum of single photon emission computed tomography/computed tomography findings in patients with parathyroid adenomas.

    Science.gov (United States)

    Chakraborty, Dhritiman; Mittal, Bhagwant Rai; Harisankar, Chidambaram Natrajan Balasubramanian; Bhattacharya, Anish; Bhadada, Sanjay

    2011-01-01

    Primary hyperparathyroidism results from excessive parathyroid hormone secretion. Approximately 85% of all cases of primary hyperparathyroidism are caused by a single parathyroid adenoma; 10-15% of the cases are caused by parathyroid hyperplasia. Parathyroid carcinoma accounts for approximately 3-4% of cases of primary disease. Technetium-99m-sestamibi (MIBI), the current scintigraphic procedure of choice for preoperative parathyroid localization, can be performed in various ways. The "single-isotope, double-phase technique" is based on the fact that MIBI washes out more rapidly from the thyroid than from abnormal parathyroid tissue. However, not all parathyroid lesions retain MIBI and not all thyroid tissue washes out quickly, and subtraction imaging is helpful. Single photon emission computed tomography (SPECT) provides information for localizing parathyroid lesions, differentiating thyroid from parathyroid lesions, and detecting and localizing ectopic parathyroid lesions. Addition of CT with SPECT improves the sensitivity. This pictorial essay demonstrates various SPECT/CT patterns observed in parathyroid scintigraphy.

  4. Proceedings of clinical SPECT [single photon emission computed tomography] symposium

    International Nuclear Information System (INIS)

    1986-09-01

    It has been five years since the last in-depth American College of Nuclear Physicians/Society of Nuclear Medicine Symposium on the subject of single photon emission computed tomography (SPECT) was held. Because this subject was nominated as the single most desired topic we have selected SPECT imaging as the basis for this year's program. The objectives of this symposium are to survey the progress of SPECT clinical applications that have taken place over the last five years and to provide practical and timely guidelines to users of SPECT so that this exciting imaging modality can be fully integrated into the evaluation of pathologic processes. The first half was devoted to a consideration of technical factors important in SPECT acquisition and the second half was devoted to those organ systems about which sufficient clinical SPECT imaging data are available. With respect to the technical aspect of the program we have selected the key areas which demand awareness and attention in order to make SPECT operational in clinical practice. These include selection of equipment, details of uniformity correction, utilization of phantoms for equipment acceptance and quality assurance, the major aspect of algorithms, an understanding of filtered back projection and appropriate choice of filters and an awareness of the most commonly generated artifacts and how to recognize them. With respect to the acquisition and interpretation of organ images, the faculty will present information on the major aspects of hepatic, brain, cardiac, skeletal, and immunologic imaging techniques. Individual papers are processed separately for the data base

  5. Programmable DNA-Mediated Multitasking Processor.

    Science.gov (United States)

    Shu, Jian-Jun; Wang, Qi-Wen; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin

    2015-04-30

    Because of DNA's appealing features as a material, including its minuscule size, defined structural repeat, and rigidity, programmable DNA-mediated processing is a promising computing paradigm that employs DNA as an information-storage and processing substrate to tackle computational problems. The massive parallelism of DNA hybridization exhibits transcendent potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with their stepwise signal cascades. As an example of this multitasking capability, we present an in vitro programmable DNA-mediated optimal-route-planning processor as a functional unit embedded in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over existing silicon-mediated methods, such as massive data storage and simultaneous processing using far less material than conventional silicon devices.

  6. A Real Time Digital Coincidence Processor for positron emission tomography

    International Nuclear Information System (INIS)

    Dent, H.M.; Jones, W.F.; Casey, M.E.

    1986-01-01

    A Real Time Digital Coincidence Processor has been developed for use in the Positron Emission Tomograph (PET) ECAT scanners manufactured by Computer Technology and Imaging, Inc. (CTI). The primary functions of the Coincidence Processor are to receive serial data from the BGO detector modules, including timing information and detector identification; to process the received data to form coincidence detector pairs; and to present the coincidence pair data to a Real Time Sorter. The primary design emphasis was placed on the Coincidence Processor being able to process the detector data into coincidence pairs at real-time rates. This paper briefly describes the Coincidence Processor and some of the considerations that went into its design
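
    The core pairing logic of such a coincidence processor can be sketched in software, although the real device performs it in hardware at real-time rates. In the Python sketch below the coincidence window and the event format are illustrative assumptions, not details of the CTI design.

        def find_coincidences(events, window_ns=12):
            # events: time-sorted list of (timestamp_ns, detector_id)
            # singles from the detector modules. Pair two singles from
            # different detectors that fall within the coincidence window.
            pairs = []
            i = 0
            while i < len(events) - 1:
                t0, d0 = events[i]
                t1, d1 = events[i + 1]
                if t1 - t0 <= window_ns and d0 != d1:
                    pairs.append((d0, d1))   # coincidence detector pair
                    i += 2                   # both singles are consumed
                else:
                    i += 1
            return pairs

        print(find_coincidences([(0, 3), (5, 40), (100, 7), (230, 12), (238, 55)]))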

  7. ACP/R3000 processors in data acquisition systems

    International Nuclear Information System (INIS)

    Deppe, J.; Areti, H.; Atac, R.

    1989-02-01

    We describe ACP/R3000 processor based data acquisition systems for high energy physics. This VME bus compatible processor board, with a computational power equivalent to 15 VAX 11/780s or better, contains 8 Mb of memory for event buffering and has a high speed secondary bus that allows data gathering from front end electronics. 2 refs., 3 figs

  8. Message Passing on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Sørensen, Rasmus Bo; Puffitsch, Wolfgang; Schoeberl, Martin

    2015-01-01

    Real-time systems need time-predictable computing platforms. For a multicore processor to be time-predictable, communication between processor cores needs to be time-predictable as well. This paper presents a time-predictable message-passing library for such a platform. We show how to build up...

  9. 3081/E processor

    International Nuclear Information System (INIS)

    Kunz, P.F.; Gravina, M.; Oxoby, G.; Trang, Q.; Fucci, A.; Jacobs, D.; Martin, B.; Storr, K.

    1983-03-01

    Since the introduction of the 168/E, emulating processors have been successful over an amazingly wide range of applications. This paper will describe a second generation processor, the 3081/E. This new processor, which is being developed as a collaboration between SLAC and CERN, goes beyond just fixing the obvious faults of the 168/E. Not only will the 3081/E have much more memory space, incorporate many more IBM instructions, and have full double precision floating point arithmetic, but it will also have faster execution times and be much simpler to build, debug, and maintain. The simple interface and reasonable cost of the 168/E will be maintained for the 3081/E

  10. Proceedings of clinical SPECT (single photon emission computed tomography) symposium

    Energy Technology Data Exchange (ETDEWEB)

    1986-09-01

    It has been five years since the last in-depth American College of Nuclear Physicians/Society of Nuclear Medicine Symposium on the subject of single photon emission computed tomography (SPECT) was held. Because this subject was nominated as the single most desired topic we have selected SPECT imaging as the basis for this year's program. The objectives of this symposium are to survey the progress of SPECT clinical applications that have taken place over the last five years and to provide practical and timely guidelines to users of SPECT so that this exciting imaging modality can be fully integrated into the evaluation of pathologic processes. The first half was devoted to a consideration of technical factors important in SPECT acquisition and the second half was devoted to those organ systems about which sufficient clinical SPECT imaging data are available. With respect to the technical aspect of the program we have selected the key areas which demand awareness and attention in order to make SPECT operational in clinical practice. These include selection of equipment, details of uniformity correction, utilization of phantoms for equipment acceptance and quality assurance, the major aspect of algorithms, an understanding of filtered back projection and appropriate choice of filters and an awareness of the most commonly generated artifacts and how to recognize them. With respect to the acquisition and interpretation of organ images, the faculty will present information on the major aspects of hepatic, brain, cardiac, skeletal, and immunologic imaging techniques. Individual papers are processed separately for the data base. (TEM)

  11. A high-speed analog neural processor

    NARCIS (Netherlands)

    Masa, P.; Masa, Peter; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    Targeted at high-energy physics research applications, our special-purpose analog neural processor can classify up to 70 dimensional vectors within 50 nanoseconds. The decision-making process of the implemented feedforward neural network enables this type of computation to tolerate weight

  12. A programmable two-qubit quantum processor in silicon

    Science.gov (United States)

    Watson, T. F.; Philips, S. G. J.; Kawakami, E.; Ward, D. R.; Scarlino, P.; Veldhorst, M.; Savage, D. E.; Lagally, M. G.; Friesen, Mark; Coppersmith, S. N.; Eriksson, M. A.; Vandersypen, L. M. K.

    2018-03-01

    Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Jozsa algorithm and the Grover search algorithm—canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.

  13. A programmable two-qubit quantum processor in silicon.

    Science.gov (United States)

    Watson, T F; Philips, S G J; Kawakami, E; Ward, D R; Scarlino, P; Veldhorst, M; Savage, D E; Lagally, M G; Friesen, Mark; Coppersmith, S N; Eriksson, M A; Vandersypen, L M K

    2018-03-29

    Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch-Jozsa algorithm and the Grover search algorithm-canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85-89 per cent and concurrences of 73-82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
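
    For readers unfamiliar with the Deutsch-Jozsa algorithm named above, its two-qubit form can be reproduced with a few lines of linear algebra. The sketch below is a plain state-vector simulation of the textbook algorithm, not a model of the silicon device itself.

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
        I2 = np.eye(2)

        def oracle(f):
            # U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix.
            U = np.zeros((4, 4))
            for x in (0, 1):
                for y in (0, 1):
                    U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
            return U

        def deutsch_jozsa(f):
            # One oracle query decides whether f: {0,1} -> {0,1} is
            # constant or balanced.
            state = np.kron([1.0, 0.0], [0.0, 1.0])    # |0>|1>
            state = np.kron(H, H) @ state              # superpose
            state = oracle(f) @ state                  # single query
            state = np.kron(H, I2) @ state             # interfere
            p1 = state[2]**2 + state[3]**2             # P(first qubit = 1)
            return "balanced" if p1 > 0.5 else "constant"

        print(deutsch_jozsa(lambda x: 0))   # constant function
        print(deutsch_jozsa(lambda x: x))   # balanced function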

  14. An Autonomous Underwater Recorder Based on a Single Board Computer

    Science.gov (United States)

    Caldas-Morgan, Manuel; Alvarez-Rosario, Alexander; Rodrigues Padovese, Linilson

    2015-01-01

    As industrial activities continue to grow on the Brazilian coast, underwater sound measurements are becoming of great scientific importance as they are essential to evaluate the impact of these activities on local ecosystems. In this context, the use of commercial underwater recorders is not always the most feasible alternative, due to their high cost and lack of flexibility. Design and construction of more affordable alternatives from scratch can become complex because it requires profound knowledge in areas such as electronics and low-level programming. With the aim of providing a solution, a highly flexible, low-cost alternative to commercial recorders was built around a Raspberry Pi single-board computer. A properly working prototype was assembled, and it demonstrated adequate performance levels in all tested situations. The prototype was equipped with a power management module which was thoroughly evaluated; it is estimated that it will allow for great battery savings on long-term scheduled recordings. The underwater recording device was successfully deployed at selected locations along the Brazilian coast, where it adequately recorded animal and manmade acoustic events, among others. Although power consumption may not be as efficient as that of commercial and/or micro-processed solutions, the advantages offered by the proposed device are its high customizability, lower development time and, inherently, its cost. PMID:26076479

  15. An Autonomous Underwater Recorder Based on a Single Board Computer.

    Science.gov (United States)

    Caldas-Morgan, Manuel; Alvarez-Rosario, Alexander; Rodrigues Padovese, Linilson

    2015-01-01

    As industrial activities continue to grow on the Brazilian coast, underwater sound measurements are becoming of great scientific importance as they are essential to evaluate the impact of these activities on local ecosystems. In this context, the use of commercial underwater recorders is not always the most feasible alternative, due to their high cost and lack of flexibility. Design and construction of more affordable alternatives from scratch can become complex because it requires profound knowledge in areas such as electronics and low-level programming. With the aim of providing a solution, a highly flexible, low-cost alternative to commercial recorders was built around a Raspberry Pi single-board computer. A properly working prototype was assembled, and it demonstrated adequate performance levels in all tested situations. The prototype was equipped with a power management module which was thoroughly evaluated; it is estimated that it will allow for great battery savings on long-term scheduled recordings. The underwater recording device was successfully deployed at selected locations along the Brazilian coast, where it adequately recorded animal and manmade acoustic events, among others. Although power consumption may not be as efficient as that of commercial and/or micro-processed solutions, the advantages offered by the proposed device are its high customizability, lower development time and, inherently, its cost.
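
    The battery-saving argument for the power management module reduces to a simple duty-cycle calculation. All figures in the sketch below are illustrative assumptions, not measurements from the paper.

        def battery_life_days(capacity_wh, active_w, sleep_w, duty_cycle):
            # Average power of a scheduled recorder that sleeps between
            # recordings, and the resulting battery life in days.
            avg_w = duty_cycle * active_w + (1 - duty_cycle) * sleep_w
            return capacity_wh / avg_w / 24

        # e.g. 100 Wh pack, 3.5 W while recording, 0.1 W asleep,
        # 10 minutes of recording per hour:
        print(f"{battery_life_days(100, 3.5, 0.1, 10/60):.1f} days")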

  16. Single photon emission computed tomography in lumbar degenerative spondylolisthesis

    International Nuclear Information System (INIS)

    Ito, S.; Muro, T.; Eisenstein, S.

    1998-01-01

    Analysis of single photon emission computed tomographic images and plain X-ray films of the lumbar vertebrae was performed in 15 patients with lumbar spondylosis and 15 patients with lumbar degenerative spondylolisthesis. The facet joint and osteophyte images were observed in particular, and the slipping ratio of spondylolisthetic vertebrae was determined. The slipping ratio of degenerative spondylolisthesis ranged from 11.8 % to 22.3 %. Hot uptake of 99mTc-HMDP by both L4-5 facet joints was significantly greater in the patients with degenerative spondylolisthesis than in those with lumbar spondylosis. The hot uptake by the osteophytes in lumbar spondylosis was nearly uniform among the three inferior segments, L3-4, L4-5 and L5-S, but was localized to the spondylolisthetic vertebrae, L4-5, or L5-S, in the patients with spondylolisthesis. Half of the osteophytes with hot uptake were assigned to the 3rd degree of Nathan's grading. It was suggested that stress was localized to the slipping vertebrae and their facet joints in patients with lumbar degenerative spondylolisthesis. (author)

  17. Brain single photon emission computed tomography in neonates

    Energy Technology Data Exchange (ETDEWEB)

    Denays, R.; Van Pachterbeke, T.; Tondeur, M.; Spehl, M.; Toppet, V.; Ham, H.; Piepsz, A.; Rubinstein, M.; Nol, P.H.; Haumont, D. (Free Universities of Brussels (Belgium))

    1989-08-01

    This study was designed to rate the clinical value of (123I)iodoamphetamine (IMP) or (99mTc)hexamethylpropylene amine oxime (HM-PAO) brain single photon emission computed tomography (SPECT) in neonates, especially in those likely to develop cerebral palsy. The results showed that SPECT abnormalities were congruent in most cases with structural lesions demonstrated by ultrasonography. However, mild bilateral ventricular dilatation and bilateral subependymal porencephalic cysts diagnosed by ultrasound were not associated with an abnormal SPECT finding. In contrast, some cortical periventricular and sylvian lesions and all the parasagittal lesions well visualized in SPECT studies were not diagnosed by ultrasound scans. In neonates with subependymal and/or intraventricular hemorrhage the existence of a parenchymal abnormality was only diagnosed by SPECT. These results indicate that (123I)IMP or (99mTc)HM-PAO brain SPECT shows a potential clinical value as the neurodevelopmental outcome is clearly related to the site, the extent, and the number of cerebral lesions. Long-term clinical follow-up is, however, mandatory in order to define which SPECT abnormality is associated with neurologic deficit.

  18. Single-Photon Emission Computed Tomography (SPECT) in childhood epilepsy

    International Nuclear Information System (INIS)

    Gulati, Sheffali; Kalra, Veena; Bal, C.S.

    2000-01-01

    The success of epilepsy surgery is determined strongly by the precise location of the epileptogenic focus. The information from clinical electrophysiological data needs to be strengthened by functional neuroimaging techniques. Single photon emission computed tomography (SPECT) available locally has proved useful as a localising investigation. It evaluates the regional cerebral blood flow and the comparison between ictal and interictal blood flow on SPECT has proved to be a sensitive nuclear marker for the site of seizure onset. Many studies justify the utility of SPECT in localising lesions to possess greater precision than interictal scalp EEG or anatomic neuroimaging. SPECT is of definitive value in temporal lobe epilepsy. Its role in extratemporal lobe epilepsy is less clearly defined. It is useful in various other generalized and partial seizure disorders including epileptic syndromes and helps in differentiating pseudoseizures from true seizures. The need for newer radiopharmaceutical agents with specific neurochemical properties and longer shelf life are under investigation. Subtraction ictal SPECT co-registered to MRI is a promising new modality. (author)
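
    The ictal/interictal comparison highlighted above amounts to normalizing two co-registered SPECT volumes and thresholding their difference. The Python sketch below assumes the volumes are already co-registered; the z-score threshold and the synthetic data are illustrative, not part of any published protocol.

        import numpy as np

        def ictal_interictal_difference(ictal, interictal, z_thresh=2.0):
            # Normalize each volume to its mean counts, subtract, and
            # keep voxels whose difference is unusually large --
            # candidate regions of ictal hyperperfusion.
            a = ictal / ictal.mean()
            b = interictal / interictal.mean()
            diff = a - b
            z = (diff - diff.mean()) / diff.std()
            return np.where(z > z_thresh, diff, 0.0)

        rng = np.random.default_rng(0)
        ictal = rng.poisson(100, (16, 16, 16)).astype(float)
        interictal = rng.poisson(100, (16, 16, 16)).astype(float)
        print(ictal_interictal_difference(ictal, interictal).max())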

  19. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  20. Intrusion Detection Architecture Utilizing Graphics Processors

    Directory of Open Access Journals (Sweden)

    Branislav Madoš

    2012-12-01

    With thriving technology and the great increase in the usage of computer networks, the risk of these networks coming under attack has increased. A number of techniques have been created and designed to help detect and/or prevent such attacks. One common technique is the use of Intrusion Detection Systems (IDS). Today, a number of open-source and commercial IDS are available to match enterprise requirements; however, the performance of these systems is still the main concern. This paper examines perceptions of intrusion detection architecture implementation resulting from the use of graphics processors. It discusses recent research activities, developments and problems of operating system security. Some exploratory evidence is presented that shows the capabilities of using graphics processors with intrusion detection systems. The focus is on how knowledge gained through the inclusion of the graphics processor has played out in the design of an intrusion detection architecture, seen as an opportunity to strengthen research expertise.

  1. Embedded processor extensions for image processing

    Science.gov (United States)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  2. gFEX, the ATLAS Calorimeter Level 1 Real Time Processor

    CERN Document Server

    Tang, Shaochun; The ATLAS collaboration

    2015-01-01

    The global feature extractor (gFEX) is a component of the Level-1Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx Ultra-scale FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 264 optical fibers with the data transferred at the 40 MHz LHC clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure all the processor FPGAs, monitor board health, and interface to external signals. Now, the pre-prototype board which includes one ZYNQ and one Vertex-7 FPGA has been designed for testing and verification. The performance ...

  3. gFEX, the ATLAS Calorimeter Level-1 Real Time Processor

    CERN Document Server

    AUTHOR|(SzGeCERN)759889; The ATLAS collaboration; Begel, Michael; Chen, Hucheng; Lanni, Francesco; Takai, Helio; Wu, Weihao

    2015-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx Vertex Ultra-scale FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 276 optical fibers with the data transferred at the 40 MHz Large Hadron Collider (LHC) clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure all the processor Field-Programmable Gate Array (FPGAs), monitor board health, and interface to external signals. Now, the pre-prototype board which includes one ZYNQ and one Vertex-7 FPGA ...

  4. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    International Nuclear Information System (INIS)

    Barhen, Jacob; Kerekes, Ryan A.; St Charles, Jesse Lee; Buckner, Mark A.

    2008-01-01

    performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125MHz. At each clock cycle, 128K multiply-and-add operations per second (OPS) are carried out, which yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine. The Cell processor is the extraordinary resulting product of 5 years of sustained, intensive R and D collaboration (involving over $400M investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed in 128-bit wide single instruction multiple data streams (SIMD). An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at the ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units. The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.

  5. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, Jacob [ORNL; Kerekes, Ryan A [ORNL; ST Charles, Jesse Lee [ORNL; Buckner, Mark A [ORNL

    2008-01-01

    performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125MHz. At each clock cycle, 128K multiply-and-add operations per second (OPS) are carried out, which yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine. The Cell processor is the extraordinary resulting product of 5 years of sustained, intensive R&D collaboration (involving over $400M investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed in 128-bit wide single instruction multiple data streams (SIMD). An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at the ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units. The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.
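
    The quoted peak figure follows from one line of arithmetic, reading 128K as 128 × 1024 multiply-and-add operations per clock cycle; a quick Python check:

        ops_per_cycle = 128 * 1024   # 128K multiply-and-adds per cycle
        clock_hz = 125e6             # 125 MHz system clock
        print(ops_per_cycle * clock_hz / 1e12, "TeraOPS")  # -> 16.384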

  6. Universal hybrid quantum processors

    International Nuclear Information System (INIS)

    Vlasov, A.Yu.

    2003-01-01

    A quantum processor (a programmable gate array) is a quantum network with a fixed structure. Its space of states is represented as the tensor product of data and program registers. Different unitary operations on the data register correspond to 'loaded' programs, without any change or 'tuning' of the network itself. Because of this property, and because entanglement between the program and data registers is undesirable, the universality of quantum processors is subject to rather strong restrictions. Universal 'stochastic' quantum gate arrays were developed by different authors. It was also proved that 'deterministic' quantum processors with a finite-dimensional space of states may be universal only in an approximate sense. In the present paper it is shown that, using a hybrid system with continuous and discrete quantum variables, it is possible to suggest a design for strictly universal quantum processors. It is also shown that the 'deterministic' limit of specific programmable 'stochastic' U(1) gates (the probability of success approaches unity for an infinite program register), discussed by other authors, may be essentially the same kind of hybrid quantum system used here

  7. Beyond processor sharing

    NARCIS (Netherlands)

    S. Aalto; U. Ayesta (Urtzi); S.C. Borst (Sem); V. Misra; R. Núñez Queija (Rudesindo)

    2007-01-01

    While the (Egalitarian) Processor-Sharing (PS) discipline offers crucial insights into the performance of fair resource allocation mechanisms, it is inherently limited in analyzing and designing differentiated scheduling algorithms such as Weighted Fair Queueing and Weighted Round-Robin.
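
    As a concrete point of contrast with egalitarian PS, a toy version of Weighted Round-Robin, one of the differentiated disciplines named above, fits in a few lines of Python; the queue contents and weights are illustrative.

        from collections import deque

        def weighted_round_robin(queues, weights, rounds):
            # In each round, queue i may send up to weights[i] jobs,
            # giving proportionally differentiated service.
            served = []
            for _ in range(rounds):
                for q, w in zip(queues, weights):
                    for _ in range(w):
                        if q:
                            served.append(q.popleft())
            return served

        qa = deque(f"A{i}" for i in range(5))
        qb = deque(f"B{i}" for i in range(5))
        print(weighted_round_robin([qa, qb], weights=[2, 1], rounds=3))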

  8. Computational identification and analysis of single-nucleotide ...

    Indian Academy of Sciences (India)

    Department of Biotechnology and Bioinformatics, Jaypee University of Information Technology (JUIT), Waknaghat, Teh Kandaghat, Solan 173 234, India; School of Computer Science and Information Technology, Devi Ahilya Vishwavidyalaya (DAVV), Indore 452 013, India; Computational Biology Group, Abhyudaya ...

  9. Automobile Crash Sensor Signal Processor

    Science.gov (United States)

    1973-11-01

    The crash sensor signal processor described interfaces between an automobile-installed doppler radar and an air bag activating solenoid or equivalent electromechanical device. The processor utilizes both digital and analog techniques to produce an ou...

  10. The Central Trigger Processor (CTP)

    CERN Multimedia

    Franchini, Matteo

    2016-01-01

    The Central Trigger Processor (CTP) receives trigger information from the calorimeter and muon trigger processors, as well as from other sources of trigger. It makes the Level-1 decision (L1A) based on a trigger menu.

  11. Universal quantum gates for Single Cooper Pair Box based quantum computing

    Science.gov (United States)

    Echternach, P.; Williams, C. P.; Dultz, S. C.; Braunstein, S.; Dowling, J. P.

    2000-01-01

    We describe a method for achieving arbitrary 1-qubit gates and controlled-NOT gates within the context of the Single Cooper Pair Box (SCB) approach to quantum computing. Such gates are sufficient to support universal quantum computation.

  12. Single-photon emission computed tomography and early death in acute ischemic stroke

    NARCIS (Netherlands)

    Limburg, M.; van Royen, E. A.; Hijdra, A.; de Bruïne, J. F.; Verbeeten, B. W.

    1990-01-01

    Single-photon emission computed tomography with thallium-201-labeled diethyldithiocarbamate was performed in 26 consecutive patients less than or equal to 24 hours after a supratentorial brain infarction. Computed tomography excluded other relevant pathology. Two observers assessed the initial

  13. Computational and Experimental Insight Into Single-Molecule Piezoelectric Materials

    Science.gov (United States)

    Marvin, Christopher Wayne

    Piezoelectric materials allow for the harvesting of ambient waste energy from the environment. Producing lightweight, highly responsive materials is a challenge for this type of material, requiring polymer, foam, or bio-inspired materials. In this dissertation, I explore the origin of the piezoelectric effect in single molecules through density functional theory (DFT), analyze the piezoresponse of bio-inspired peptidic materials through the use of atomic and piezoresponse force microscopy (AFM and PFM), and develop a novel class of materials combining flexible polyurethane foams and non-piezoelectric, polar dopants. For the DFT calculations, functional group, regiochemical, and heteroatom derivatives of [6]helicene were examined for their influence on the piezoelectric response. An aza[6]helicene derivative was found to have a piezoelectric response (108 pm/V) comparable to ceramics such as lead zirconium titanate (200+ pm/V). These computed materials could compete with current field-leading piezomaterials such as lead zirconium titanate (PZT), zinc oxide (ZnO), and polyvinylidene difluoride (PVDF) and its derivatives. The use of AFM/PFM allows for the demonstration of the piezoelectric effect of the self-assembled monolayer (SAM) peptidic systems. Through PFM, the influence that the helicity and sequence of the peptide have on the overall response of the molecule can be analyzed. Finally, development of a novel class of piezoelectrics, the foam-based materials, expands the current understanding of the qualities required for a piezoelectric material from ceramic and rigid materials to more flexible, organic materials. Through the exploration of these novel types of piezoelectric materials, new design rules and figures of merit have been developed.

  14. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
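
    A software model of the scheme described above is straightforward; the class and the bit-flip simulation below are illustrative sketches, not the patented implementation.

        class ProtectedRegisterFile:
            # Registers named in the error-correction table get a shadow
            # copy on every write; reads compare the copies to detect a
            # corrupted sensitive register.
            def __init__(self, sensitive):
                self.sensitive = set(sensitive)  # error-correction table
                self.regs, self.shadow = {}, {}

            def write(self, name, value):
                self.regs[name] = value
                if name in self.sensitive:
                    self.shadow[name] = value    # duplicate register

            def read(self, name):
                value = self.regs[name]
                if name in self.sensitive and self.shadow[name] != value:
                    raise RuntimeError(f"soft error detected in {name}")
                return value

        rf = ProtectedRegisterFile(sensitive=["r1"])
        rf.write("r1", 42)
        rf.regs["r1"] ^= 4                       # simulate a bit flip
        try:
            rf.read("r1")
        except RuntimeError as err:
            print(err)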

  15. High-Performance Linear Algebra Processor using FPGA

    National Research Council Canada - National Science Library

    Johnson, J

    2004-01-01

    With recent advances in FPGA (Field Programmable Gate Array) technology it is now feasible to use these devices to build special purpose processors for floating point intensive applications that arise in scientific computing...

  16. Complexity of scheduling multiprocessor tasks with prespecified processor allocations

    NARCIS (Netherlands)

    Hoogeveen, J.A.; van de Velde, S.L.; van de Velde, S.L.; Veltman, Bart

    1995-01-01

    We investigate the computational complexity of scheduling multiprocessor tasks with prespecified processor allocations. We consider two criteria: minimizing schedule length and minimizing the sum of the task completion times. In addition, we investigate the complexity of problems when precedence

  17. The Molen Polymorphic Media Processor

    NARCIS (Netherlands)

    Kuzmanov, G.K.

    2004-01-01

    In this dissertation, we address high performance media processing based on a tightly coupled co-processor architectural paradigm. More specifically, we introduce a reconfigurable media augmentation of a general purpose processor and implement it into a fully operational processor prototype. The

  18. Dual-core Itanium Processor

    CERN Multimedia

    2006-01-01

    Intel’s first dual-core Itanium processor, code-named "Montecito", is a major release of Intel's Itanium 2 Processor Family, which implements the Intel Itanium architecture on a dual-core processor with two cores per die (integrated circuit). Itanium 2 is much more powerful than its predecessor. It has lower power consumption and thermal dissipation.

  19. 50 CFR 660.160 - Catcher/processor (C/P) Coop Program.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Catcher/processor (C/P) Coop Program. 660... Groundfish-Limited Entry Trawl Fisheries § 660.160 Catcher/processor (C/P) Coop Program. (a) General. The C/P... fishery and is a single voluntary coop. Eligible harvesters and processors must meet the requirements set...

  20. 77 FR 44572 - Second Fishing Capacity Reduction Program for the Longline Catcher Processor Subsector of the...

    Science.gov (United States)

    2012-07-30

    ... Processor Subsector of the Bering Sea and Aleutian Islands Non- Pollock Groundfish Fishery AGENCY: National... million loan for a single latent permit within the Longline Catcher Processor Subsector of the Bering Sea... second round of capacity reduction for the BSAI Longline Catcher Processor Subsector, NMFS must publish...

  1. The breaking point of modern processor and platform technology

    International Nuclear Information System (INIS)

    Jarp, Sverre; Lazzaro, Alfio; Leduc, Julien; Nowak, Andrzej

    2011-01-01

    This work is an overview of state-of-the-art processors used in High Energy Physics, their architecture, and an extensive outline of the forthcoming technologies. Silicon process science and hardware design are making constant and rapid progress, and a solid grasp of these developments is imperative to the understanding of their possible future applications, which might include software strategy, optimizations, computing center operations and hardware acquisitions. In particular, the current issue of software and platform scalability is becoming more and more noticeable, and will develop in the near future with the growing core count of single chips and the approach of certain x86 architectural limits. Other topics brought forward include the hard, physical limits of innovation, the applicability of tried and tested computing formulas to modern technologies, as well as an analysis of viable alternative choices for continued development.

  2. Software verification and validation methodology for advanced digital reactor protection system using diverse dual processors to prevent common mode failure

    International Nuclear Information System (INIS)

    Son, Ki Chang; Shin, Hyun Kook; Lee, Nam Hoon; Baek, Seung Min; Kim, Hang Bae

    2001-01-01

    The Advanced Digital Reactor Protection System (ADRPS) with diverse dual processors is being developed by the National Research Lab of KOPEC for ADRPS development. One of the ADRPS goals is to develop a digital Plant Protection System (PPS) free of Common Mode Failure (CMF). To prevent CMF, the principle of diversity is applied to both the hardware design and the software design. For hardware diversity, two different types of CPUs are used for the Bistable Processor and the Local Coincidence Logic Processor. VME-based Single Board Computers (SBCs) are used as the CPU hardware platforms. The QNX Operating System (OS) and the VxWorks OS are used for software diversity. Rigorous Software Verification and Validation (V and V) is also required to prevent CMF. In this paper, the software V and V methodology for the ADRPS is described to enhance the ADRPS software reliability and to assure high quality of the ADRPS software

  3. Computational identification and analysis of single-nucleotide ...

    Indian Academy of Sciences (India)

    2 School of Computer Science and Information Technology, Devi Ahilya Vishwavidyalaya (DAVV), Indore 452 013, India. 3 Computational Biology ... and breeding, as genes of scientific and agronomic importance can be isolated solely on ... indels (insertion/deletion) has led to a revolution in their use as molecular markers ...

  4. Multimode power processor

    Science.gov (United States)

    O'Sullivan, George A.; O'Sullivan, Joseph A.

    1999-01-01

    In one embodiment, a power processor which operates in three modes: an inverter mode wherein power is delivered from a battery to an AC power grid or load; a battery charger mode wherein the battery is charged by a generator; and a parallel mode wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources.

  5. Simulation of a processor switching circuit with APLSV

    International Nuclear Information System (INIS)

    Dilcher, H.

    1979-01-01

    The report describes the simulation of a processor switching circuit in APL. Furthermore, an APL function is presented that simulates a processor in an assembly-like language. Together they serve as a tool for studying processor properties; by means of the programming function it is also possible to program other simulated processors. The processor is to be used for the real-time processing of data arising in high-energy physics experiments. The data are already offered to the computer in digitized form; a typical data rate is 10 kB/s. The data are structured in blocks. The individual blocks are 1 kB wide and are independent of each other. A processor has to decide whether the block data belong to an event that is part of the background noise and can therefore be discarded, or whether the data should be saved for later evaluation. (orig./WB)
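
    The keep-or-discard decision over independent 1 kB blocks can be mimicked with a simple amplitude trigger. The trigger condition in the Python sketch below is an illustrative stand-in for a real event-selection routine, not the report's algorithm.

        import numpy as np

        def filter_blocks(stream, block_size=1024, threshold=5.0):
            # Scan independent fixed-size blocks and keep only those
            # that look like a genuine event rather than background.
            kept = []
            sigma = np.std(stream)
            for start in range(0, len(stream) - block_size + 1, block_size):
                block = stream[start:start + block_size]
                if np.max(np.abs(block)) > threshold * sigma:  # crude trigger
                    kept.append(block)
            return kept

        rng = np.random.default_rng(1)
        data = rng.normal(0, 1, 10 * 1024)
        data[3000] = 25.0                        # one genuine "event"
        print(len(filter_blocks(data)), "block(s) kept")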

  6. WATERLOOP V2/64: A highly parallel machine for numerical computation

    Science.gov (United States)

    Ostlund, Neil S.

    1985-07-01

    Current technological trends suggest that the high performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to be anything like the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple processor architecture which attempts a solution to the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, that is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.
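
    The claim that adding a processor can reduce overall performance is easy to illustrate with a toy cost model. The quadratic overhead term below is an assumption chosen for illustration (e.g. all-to-all communication), not the systolic loop's actual behavior.

        def effective_speedup(n, c=1e-4):
            # If organizational overhead grows with the square of the
            # processor count, speedup n / (1 + c*n**2) peaks and then
            # falls as n grows: an extra processor can slow things down.
            return n / (1 + c * n * n)

        for n in (1, 16, 64, 100, 256, 1024):
            print(n, round(effective_speedup(n), 1))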

  7. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  8. Single-Chip Computers With Microelectromechanical Systems-Based Magnetic Memory

    NARCIS (Netherlands)

    Carley, L. Richard; Bain, James A.; Fedder, Gary K.; Greve, David W.; Guillou, David F.; Lu, Michael S.C.; Mukherjee, Tamal; Santhanam, Suresh; Abelmann, Leon; Min, Seungook

    This article describes an approach for implementing a complete computer system (CPU, RAM, I/O, and nonvolatile mass memory) on a single integrated-circuit substrate (a chip)—hence, the name "single-chip computer." The approach presented combines advances in the field of microelectromechanical

  9. Computation in a single neuron: Hodgkin and Huxley revisited

    OpenAIRE

    Arcas, Blaise Aguera y; Fairhall, Adrienne L.; Bialek, William

    2002-01-01

    A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information t...
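
    In its simplest form, the reverse-correlation technique mentioned above reduces to a spike-triggered average of the white-noise stimulus. The Python sketch below uses a toy threshold "neuron" whose hidden linear filter the average recovers; all parameters are illustrative.

        import numpy as np

        def spike_triggered_average(stimulus, spike_times, window=50):
            # Average the stimulus segments preceding each spike to
            # recover the neuron's preferred linear feature.
            segs = [stimulus[t - window + 1:t + 1]
                    for t in spike_times if t >= window]
            return np.mean(segs, axis=0)

        rng = np.random.default_rng(2)
        stim = rng.normal(0.0, 1.0, 200_000)
        lag_filter = np.exp(-np.arange(50) / 10.0)         # weight per lag
        drive = np.convolve(stim, lag_filter)[:len(stim)]  # filtered input
        spikes = np.where(drive > 3.0 * drive.std())[0]    # toy "neuron"
        sta = spike_triggered_average(stim, spikes)
        print(np.corrcoef(sta, lag_filter[::-1])[0, 1])    # close to +1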

  10. Applying graphics processor units to Monte Carlo dose calculation in radiation therapy

    Directory of Open Access Journals (Sweden)

    Bakhtiari M

    2010-01-01

    We investigate the potential of using a graphics processor unit (GPU) for Monte Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massive parallel processing provides a significant acceleration of the MC calculation, and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.
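
    The structure of such a calculation, and why it parallelizes so well, shows up even in a deliberately simplified 1-D sketch: every photon history is independent, which is exactly what a GPU exploits. The coefficients and the collapsed forward/backward scattering model below are illustrative, not the paper's physics.

        import numpy as np

        def percent_depth_dose(n_photons=50_000, mu_abs=0.03, mu_sc=0.02,
                               depth_cm=30.0, bins=60, seed=0):
            # Toy 1-D Monte Carlo: photons take exponentially distributed
            # free paths; each interaction is either absorption (deposit
            # dose) or a direction-flipping "scatter". No energy loss.
            rng = np.random.default_rng(seed)
            dose = np.zeros(bins)
            mu = mu_abs + mu_sc
            for _ in range(n_photons):
                z, direction = 0.0, 1.0
                while True:
                    z += direction * rng.exponential(1.0 / mu)
                    if not 0.0 <= z < depth_cm:
                        break                      # photon escaped
                    if rng.random() < mu_abs / mu:
                        dose[int(z / depth_cm * bins)] += 1.0
                        break                      # photon absorbed
                    if rng.random() < 0.5:
                        direction = -direction     # crude "scatter"
            return 100.0 * dose / dose.max()

        print(percent_depth_dose()[:5])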

  11. XOP: A second generation fast processor for on-line use in high energy physics experiments

    International Nuclear Information System (INIS)

    Lingjaerde, T.

    1981-01-01

    Processors for trigger calculations and data compression in high energy physics are characterized by a high data input capability combined with fast execution of relatively simple routines. In order to achieve the required performance it is advantageous to replace the classical computer instruction set by microcoded instructions, the various fields of which control the internal subunits in parallel. The fast processor called ESOP is based on such a principle: the different operations are handled step by step by dedicated optimized modules under control of a central instruction unit. Thus, the arithmetic operations, address calculations, conditional checking, loop counts and next instruction evaluation all overlap in time. Based upon the experience from ESOP, the architecture of a new processor, 'XOP', is beginning to take shape, which will be faster and easier to use. In this context the most important innovations are: easy handling of operands in the arithmetic unit by means of three data buses and large data files; a powerful data addressing unit for easy handling of vectors, as well as single operands; and a very flexible logic for conditional branching. Input/output will be made transparent through the introduction of internal fast processors which will be used in conjunction with powerful firmware as a software debugging aid. (orig.)

  12. Bank switched memory interface for an image processor

    International Nuclear Information System (INIS)

    Barron, M.; Downward, J.

    1980-09-01

    A commercially available image processor is interfaced to a PDP-11/45 through an 8K window of memory addresses. When the image processor was not in use it was desired to be able to use the 8K address space as real memory. The standard method of accomplishing this would have been to use UNIBUS switches to switch in either the physical 8K bank of memory or the image processor memory. This method has the disadvantage of being rather expensive. As a simple alternative, a device was built to selectively enable or disable either an 8K bank of memory or the image processor memory. To enable the image processor under program control, GEN is contracted in size, the memory is disabled, a device partition for the image processor is created above GEN, and the image processor memory is enabled. The process is reversed to restore memory to GEN. The hardware to enable/disable the image and computer memories is controlled using spare bits from a DR-11K output register. The image processor and physical memory can be switched in or out on line with no adverse effects on the system's operation

  13. Design of Processors with Reconfigurable Microarchitecture

    Directory of Open Access Journals (Sweden)

    Andrey Mokhov

    2014-01-01

    Full Text Available Energy becomes a dominating factor for a wide spectrum of computations: from intensive data processing in “big data” companies resulting in large electricity bills, to infrastructure monitoring with wireless sensors relying on energy harvesting. In this context it is essential for a computation system to be adaptable to the power supply and the service demand, which often vary dramatically during runtime. In this paper we present an approach to building processors with reconfigurable microarchitecture capable of changing the way they fetch and execute instructions depending on energy availability and application requirements. We show how to use Conditional Partial Order Graphs to formally specify the microarchitecture of such a processor, explore the design possibilities for its instruction set, and synthesise the instruction decoder using correct-by-construction techniques. The paper is focused on the design methodology, which is evaluated by implementing a power-proportional version of the Intel 8051 microprocessor.

  14. Lattice gauge theory using parallel processors

    International Nuclear Information System (INIS)

    Lee, T.D.; Chou, K.C.; Zichichi, A.

    1987-01-01

    The book's contents include: Lattice Gauge Theory Lectures: Introduction and Current Fermion Simulations; Monte Carlo Algorithms for Lattice Gauge Theory; Specialized Computers for Lattice Gauge Theory; Lattice Gauge Theory at Finite Temperature: A Monte Carlo Study; Computational Method - An Elementary Introduction to the Langevin Equation; Present Status of Numerical Quantum Chromodynamics; Random Lattice Field Theory; The GF11 Processor and Compiler; The APE Computer and First Physics Results; Columbia Supercomputer Project: Parallel Supercomputer for Lattice QCD; Statistical and Systematic Errors in Numerical Simulations; Monte Carlo Simulation for LGT and Programming Techniques on the Columbia Supercomputer; and Food for Thought: Five Lectures on Lattice Gauge Theory.

  15. Timing organization of a real-time multicore processor

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Sparsø, Jens

    2017-01-01

    Real-time systems need a time-predictable computing platform. Computation, communication, and access to shared resources all need to be time-predictable. We use time division multiplexing to statically schedule all computation and communication resources, such as access to main memory or message passing ... time-predictable multicore processor where we can statically analyze the worst-case execution time of tasks.
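
    The truncated abstract names the key mechanism: a static time-division-multiplexed (TDM) schedule bounds every core's access latency by construction. A minimal sketch, assuming an invented four-core slot table (not the paper's schedule):

        # Minimal static TDM arbiter: each core owns fixed slots in a repeating
        # schedule, so worst-case access latency is bounded by the table length.
        # The 4-core schedule below is an invented example, not the paper's.
        SCHEDULE = [0, 1, 2, 3]          # slot -> core id, repeats forever

        def owner(cycle: int) -> int:
            """Core allowed to access the shared resource in this cycle."""
            return SCHEDULE[cycle % len(SCHEDULE)]

        def worst_case_wait(core: int) -> int:
            """Upper bound on cycles a core waits for its next slot,
            independent of what the other cores are doing."""
            return len(SCHEDULE) - 1

        for c in range(8):
            print(c, owner(c))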

  16. Globe hosts launch of new processor

    CERN Multimedia

    2006-01-01

    Launch of the quad-core processor chip at the Globe. On 14 November, in a series of major media events around the world, the chip-maker Intel launched its new 'quad-core' processor. For the regions of Europe, the Middle East and Africa, the day-long launch event took place in CERN's Globe of Science and Innovation, with over 30 journalists in attendance, coming from as far away as Johannesburg and Dubai. CERN was a significant choice for the event: the first tests of this new generation of processor in Europe had been made at CERN over the preceding months, as part of CERN openlab, a research partnership with leading IT companies such as Intel, HP and Oracle. The event also provided the opportunity for the journalists to visit ATLAS and the CERN Computer Centre. The strategy of putting multiple processor cores on the same chip, which has been pursued by Intel and other chip-makers in the last few years, represents an important departure from the more traditional improvements in the sheer speed of such chips. ...

  17. Computing magnetic anisotropy constants of single molecule magnets

    Indian Academy of Sciences (India)

    Administrator

    stringent requirements for a molecule to behave as an SMM. Modelling magnetic anisotropy in these systems becomes necessary for developing new SMMs with desired properties. Magnetic anisotropy of SMMs is computed by treating the anisotropy Hamiltonian as a perturbation over the Heisenberg exchange ...
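
    The snippet breaks off at the perturbative treatment. The standard form of the spin Hamiltonian it refers to is reconstructed below as a hedged illustration; the exchange constants J_ij and the zero-field-splitting parameters D and E are system-specific and not given in the record:

        % Heisenberg exchange plus single-ion anisotropy, the latter treated
        % as a perturbation over the exchange part
        H = H_{\mathrm{ex}} + H_{\mathrm{aniso}}, \qquad
        H_{\mathrm{ex}} = \sum_{i<j} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j, \qquad
        H_{\mathrm{aniso}} = D S_z^{2} + E \left( S_x^{2} - S_y^{2} \right)

    with H_aniso applied perturbatively to the eigenstates of the exchange Hamiltonian H_ex.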

  18. Debugging in a multi-processor environment

    International Nuclear Information System (INIS)

    Spann, J.M.

    1981-01-01

    The Supervisory Control and Diagnostic System (SCDS) for the Mirror Fusion Test Facility (MFTF) consists of nine 32-bit minicomputers arranged in a tightly coupled distributed computer system utilizing a shared memory as the data exchange medium. Debugging more than one program in this multi-processor environment is a difficult process. This paper describes the new tools that were developed and how software testing is performed in the SCDS for the MFTF project.

  19. Trigger and decision processors

    International Nuclear Information System (INIS)

    Franke, G.

    1980-11-01

    In recent years there have been many attempts in high energy physics to make trigger and decision processes faster and more sophisticated. This became necessary due to a steady increase in the number of sensitive detector elements in wire chambers and calorimeters, and it became possible because of rapid developments in integrated circuit technology. In this paper the present situation will be reviewed. The discussion will be mainly focussed upon event filtering by pure software methods and - rather hardware-related - microprogrammable processors, as well as random access memory triggers. (orig.)

  20. Optical Finite Element Processor

    Science.gov (United States)

    Casasent, David; Taylor, Bradley K.

    1986-01-01

    A new high-accuracy optical linear algebra processor (OLAP) with many advantageous features is described. It achieves floating-point accuracy, handles bipolar data by sign-magnitude representation, performs LU decomposition using only one channel, partitions easily, and accommodates data flow. A new application for OLAPs, finite element (FE) structural analysis, is introduced and the results of a case study are presented. Error sources in encoded OLAPs are addressed for the first time. Their modeling and simulation are discussed and quantitative data are presented. Dominant error sources and the effects of composite error sources are analyzed.

  1. Command and Data Handling Processor

    OpenAIRE

    Perschy, James

    1996-01-01

    This command and data handling processor is designed to perform mission critical functions for the NEAR and ACE spacecraft. For both missions the processor formats telemetry and executes real-time, delayed and autonomy-rule commands. For the ACE mission the processor also performs spin stabilized attitude control. The design is based on the Harris RTX2010 microprocessor and the UTMC Summit MIL-STD-1553 bus controller. Fault tolerant features added include error detection, correction and write...

  2. Analog processor for electroluminescent detector

    International Nuclear Information System (INIS)

    Belkin, V.S.

    1988-01-01

    An analog processor for the spectrometric channel of a soft X-ray electroluminescent detector is described. A time-interval spectrometric measurer (TIM) with a speed of 1 ns/channel serves as the signal analyzer. The analog processor restores the signal's DC component, integrates the detector signals, generates control pulses at the TIM input, provides signal discrimination by amplitude and duration, and counts the number of input pulses per measuring cycle. A flow diagram of the analog processor and its main characteristics are presented. The analog processor dead time is 0.5-5 ms. The signal-to-noise ratio is ≥ 500. The integral nonlinearity of the scale is < 2%.

  3. Discovering Motifs in Biological Sequences Using the Micron Automata Processor.

    Science.gov (United States)

    Roy, Indranil; Aluru, Srinivas

    2016-01-01

    Finding approximately conserved sequences, called motifs, across multiple DNA or protein sequences is an important problem in computational biology. In this paper, we consider the (l, d) motif search problem of identifying one or more motifs of length l present in at least q of the n given sequences, with each occurrence differing from the motif in at most d substitutions. The problem is known to be NP-complete, and the largest solved instance reported to date is (26, 11). We propose a novel algorithm for the (l, d) motif search problem using streaming execution over a large set of non-deterministic finite automata (NFA). This solution is designed to take advantage of the Micron Automata Processor, a new technology close to deployment that can simultaneously execute multiple NFA in parallel. We demonstrate the capability for solving much larger instances of the (l, d) motif search problem using the resources available within a single automata processor board, by estimating run-times for problem instances (39, 18) and (40, 17). The paper serves as a useful guide to solving problems using this new accelerator technology.
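
    The matching criterion itself is compact. Below is a serial Python sketch of the (l, d) test; the automata processor's contribution is running one NFA per candidate motif in parallel, which this deliberately simple loop does not attempt:

        # Brute-force check for the (l, d) motif search problem: does a candidate
        # motif occur in at least q of the n sequences with at most d mismatches?
        def mismatches(a: str, b: str) -> int:
            return sum(x != y for x, y in zip(a, b))

        def occurs(motif: str, seq: str, d: int) -> bool:
            l = len(motif)
            return any(mismatches(motif, seq[i:i + l]) <= d
                       for i in range(len(seq) - l + 1))

        def is_ld_motif(motif: str, seqs: list[str], d: int, q: int) -> bool:
            return sum(occurs(motif, s, d) for s in seqs) >= q

        print(is_ld_motif("ACGT", ["AAGTT", "ACGAA", "TTTTT"], d=1, q=2))  # True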

  4. On the Distribution of Control in Asynchronous Processor Architectures

    OpenAIRE

    Rebello, Vinod

    1997-01-01

    The effective performance of computer systems is to a large measure determined by the synergy between the processor architecture, the instruction set and the compiler. In the past, the sequencing of information within processor architectures has normally been synchronous: controlled centrally by a clock. However, this global signal could possibly limit the future gains in performance that can potentially be achieved through improvements in implementation technology. T...

  5. The Interface Between Redundant Processor Modules Of Safety Grade PLC Using Mass Storage DPRAM

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Sung Jae; Song, Seong Hwan; No, Young Hun; Yun, Dong Hwa; Park, Gang Min; Kim, Min Gyu; Choi, Kyung Chul; Lee, Ui Taek [POSCO ICT Co., Korea University, Seoul (Korea, Republic of)

    2010-10-15

    The processor module of the safety-grade PLC (hereinafter called POSAFE-Q) developed by POSCO ICT provides high reliability and safety. However, POSAFE-Q could malfunction under abnormal operation brought about by exceptional environmental conditions, and in such a case it would not be able to carry out its functions normally. To prevent these situations, the need for a redundant processor module was raised. Therefore, a redundant processor module, NCPU-2Q, has been developed which provides not only the functions of a single processor module with high reliability and safety but also the functions of a redundant processor.

  6. A low-power geometric mapping co-processor for high-speed graphics application

    OpenAIRE

    Leeke, Selwyn; Maharatna, Koushik

    2006-01-01

    In this article we present a novel design of a low-power geometric mapping co-processor that can be used in high-performance graphics systems. The processor can carry out any single transformation, or combination of transformations, from the affine family, ranging from 1-D to 3-D. It allows interactive operations which can be defined either by a user (allowing it to be a stand-alone geometric transformation processor) or by a host processor (allowing it to be a co-processor to accelerate c...

  7. High-Speed General Purpose Genetic Algorithm Processor.

    Science.gov (United States)

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcoming the problem by speeding up the GA procedure. Hence, we designed a digital CMOS implementation of a GA in a [Formula: see text] process. The proposed processor is not bound to a specific application. Indeed, it is a general-purpose processor, capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as a pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, a dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator, the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable through bit-string length extension of the individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GA procedure can be run on several connected processors simultaneously.
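
    As a software point of reference for what the hardware pipelines, a minimal steady-state GA loop is sketched below; the bit length, operator choices and toy fitness function are illustrative assumptions, not the processor's fixed parameters:

        # Minimal steady-state GA: generate one child per step and replace the
        # worst individual only if the child is fitter (a 'discard' of unfit
        # offspring, in the spirit of the built-in discard operator above).
        import random

        L, POP, STEPS = 32, 64, 2000

        def fitness(x: int) -> int:            # toy objective: count of set bits
            return bin(x).count("1")

        pop = [random.getrandbits(L) for _ in range(POP)]
        for _ in range(STEPS):
            a, b = random.sample(pop, 2)       # parent selection
            cut = random.randrange(1, L)       # one-point crossover
            mask = (1 << cut) - 1
            child = (a & mask) | (b & ~mask)
            child ^= 1 << random.randrange(L)  # single-bit mutation
            worst = min(range(POP), key=lambda i: fitness(pop[i]))
            if fitness(child) > fitness(pop[worst]):
                pop[worst] = child             # steady-state replacement

        print(max(map(fitness, pop)))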

  8. Computing a single cell in the overlay of two simple polygons

    NARCIS (Netherlands)

    Berg, M. de; Devillers, O.; Dobrindt, K.T.G.; Schwarzkopf, O.

    1997-01-01

    This note combines the lazy randomized incremental construction scheme with the technique of "connectivity acceleration" to obtain an O(n (log* n)²) time randomized algorithm to compute a single face in the overlay of two simple polygons in the plane.

  9. Never Trust Your Word Processor

    Science.gov (United States)

    Linke, Dirk

    2009-01-01

    In this article, the author discusses the autocorrection mode of word processors, which leads to a number of problems, and describes an example from biochemistry exams that shows how word processors can introduce mistakes into databases and papers. The author contends that, where this system is applied, spell checking should not be left to a word…

  10. Encouraging More Women Into Computer Science:Initiating a Single-Sex Intervention Program in Sweden

    OpenAIRE

    Brandell, Gerd; Carlsson, Svante; Ekblom, Håkan; Nord, Ann-Charlotte

    1997-01-01

    The process of starting a new program in computer science and engineering, heavily based on applied mathematics and only open to women, is described in this paper. The program was introduced into an educational system without any tradition in single-sex education. Important observations made during the process included the considerable interest in mathematics and curiosity about computer science found among female students at the secondary school level, and the acceptance of the single-sex pr...

  11. A dedicated line-processor as used at the SHF

    International Nuclear Information System (INIS)

    Bevan, A.V.; Hatley, R.W.; Price, D.R.; Rankin, P.

    1985-01-01

    A hardwired trigger processor was used at the SLAC Hybrid Facility to find evidence for charged tracks originating from the fiducial volume of a 40'' rapid-cycling bubble chamber. Straight-line projections of these tracks in the plane perpendicular to the applied magnetic field were searched for using data from three sets of proportional wire chambers (PWC). This information was made directly available to the processor by means of a special digitizing card. The results memory of the processor simulated read-only memory in a 168/E processor and was accessible by it. The 168/E controlled the issuing of a trigger command to the bubble chamber flash tubes. The same design of digitizer card used by the line processor was incorporated into the 168/E, again as read-only memory, which allowed it access to the raw data for continual monitoring of trigger integrity. The design logic of the trigger processor was verified by running real PWC data through a FORTRAN simulation of the hardware. This enabled the debugging to become highly automated, since a step-by-step, computer-controlled comparison of processor registers to simulation predictions could be made.

  12. Embedded Processor Oriented Compiler Infrastructure

    Directory of Open Access Journals (Sweden)

    DJUKIC, M.

    2014-08-01

    Full Text Available In recent years, research on special compiler techniques and algorithms for embedded processors has broadened the knowledge of how to achieve better compiler performance for irregular processor architectures. However, industrial-strength compilers, besides the ability to generate efficient code, must also be robust, understandable, maintainable, and extensible. This raises the need for a compiler infrastructure that provides means for convenient implementation of embedded-processor-oriented compiler techniques. The Cirrus Logic Coyote 32 DSP is an example that shows how traditional compiler infrastructure is not able to cope with the problem. That is why a new compiler infrastructure was developed for this processor, based on research in the field of embedded system software tools and experience in the development of industrial-strength compilers. The new infrastructure is described in this paper. Compiler-generated code quality is compared with code generated by the previous compiler for the same processor architecture.

  13. Encouraging More Women into Computer Science: Initiating a Single-Sex Intervention Program in Sweden.

    Science.gov (United States)

    Brandell, Gerd; Carlsson, Svante; Eklbom, Hakan; Nord, Ann-Charlotte

    1997-01-01

    Describes the process of starting a new program in computer science and engineering that is heavily based on applied mathematics and only open to women. Emphasizes that success requires considerable interest in mathematics and curiosity about computer science among female students at the secondary level and the acceptance of the single-sex program…

  14. Performance of Artificial Intelligence Workloads on the Intel Core 2 Duo Series Desktop Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    Full Text Available As processor architectures became more advanced, Intel introduced its Core 2 Duo series of processors. The performance impact on Intel Core 2 Duo processors is analyzed using SPEC CPU INT 2006 performance numbers. This paper studies the behavior of Artificial Intelligence (AI) benchmarks on Intel Core 2 Duo series processors. Moreover, we estimated the task completion time (TCT) at 1 GHz, 2 GHz and 3 GHz Intel Core 2 Duo processor frequencies. Our results show the performance scalability of Intel Core 2 Duo series processors. Even though the AI benchmarks have similar execution times, they have dissimilar characteristics, which are identified using principal component analysis and a dendrogram. As the processor frequency increased from 1.8 GHz to 3.167 GHz, the execution time decreased by ~370 s for AI workloads. In the case of Physics/Quantum Computing programs it was ~940 s.
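
    For a compute-bound workload, a first-order estimate of how task completion time scales with clock frequency is t2 ≈ t1 · f1 / f2. A quick check of the quoted trend; the 900 s baseline is an illustrative assumption, chosen only because it is roughly consistent with the reported ~370 s reduction:

        # First-order frequency scaling of task completion time (TCT),
        # assuming a fully compute-bound workload (no memory-bound share).
        def tct_scaled(t1: float, f1: float, f2: float) -> float:
            return t1 * f1 / f2

        t1 = 900.0                       # assumed baseline at 1.8 GHz, seconds
        t2 = tct_scaled(t1, 1.8, 3.167)
        print(t2, t1 - t2)               # ~511 s, i.e. a ~389 s reduction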

  15. Single photon emission computed tomography by using fan beam collimator

    International Nuclear Information System (INIS)

    Akiyama, Yoshihisa

    1992-01-01

    A multislice fan beam collimator, which has parallel collimation along the cephalocaudal axis of the patient and converging collimation within planes perpendicular to that axis, was designed for a SPECT system with a rotating scintillation camera, and it was constructed by the lead-casting method developed in recent years. A reconstruction algorithm for fan beam SPECT was formed by combining the reconstruction algorithm of parallel beam SPECT with that of fan beam X-ray CT. The algorithm was confirmed by means of computer simulation and a head phantom filled with diluted radionuclide. Both 99mTc and 123I were used as radionuclides. A SPECT image obtained with the fan beam collimator was compared with that of a parallel-hole, low-energy, high-resolution collimator routinely used for clinical and research SPECT studies. Both the system resolution and the sensitivity of the fan beam collimator were ∼20% better than those of the parallel-hole collimator, and the SPECT images obtained with the fan beam collimator had far better resolution. A fan beam collimator is a useful implement for SPECT studies. (author)

  16. Compute Server Performance Results

    Science.gov (United States)

    Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
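
    The PPR figures double as a back-of-envelope price check, since price ≈ sustained FLOPS / PPR. Using only the numbers quoted above:

        # Implied per-processor price from the quoted sustained rate and PPR.
        c90_flops = 460e6            # sustained FLOPS, from the abstract
        c90_ppr = 160                # FLOPS per dollar, from the abstract
        print(c90_flops / c90_ppr)   # ~2.9 million dollars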

  17. FPGA Based Intelligent Co-operative Processor in Memory Architecture

    DEFF Research Database (Denmark)

    Ahmed, Zaki; Sotudeh, Reza; Hussain, Dil Muhammad Akbar

    2011-01-01

    In a continuing effort to improve computer system performance, Processor-In-Memory (PIM) architecture has emerged as an alternative solution. PIM architecture incorporates computational units and control logic directly on the memory to provide immediate access to the data. To exploit the potentia...

  18. A microprocessor-based single board computer for high energy physics event pattern recognition

    International Nuclear Information System (INIS)

    Bernstein, H.; Gould, J.J.; Imossi, R.; Kopp, J.K.; Love, W.A.; Ozaki, S.; Platner, E.D.; Kramer, M.A.

    1981-01-01

    A single-board MC68000-based computer has been assembled and benchmarked against the CDC 7600 running portions of the pattern recognition code used at the MPS. This computer has a floating-point coprocessor, achieving throughput equivalent to several percent that of the 7600. A major part of this work was the construction of a FORTRAN compiler, including an assembler, linker and library. The intention of this work is to assemble a large number of these single-board computers in a parallel FASTBUS environment to act as an on-line and off-line filter for the raw data from MPS II and ISABELLE experiments. (orig.)

  20. A two-qubit photonic quantum processor and its application to solving systems of linear equations

    Science.gov (United States)

    Barz, Stefanie; Kassal, Ivan; Ringbauer, Martin; Lipp, Yannick Ole; Dakić, Borivoje; Aspuru-Guzik, Alán; Walther, Philip

    2014-01-01

    Large-scale quantum computers will require the ability to apply long sequences of entangling gates to many qubits. In a photonic architecture, where single-qubit gates can be performed easily and precisely, the application of consecutive two-qubit entangling gates has been a significant obstacle. Here, we demonstrate a two-qubit photonic quantum processor that implements two consecutive CNOT gates on the same pair of polarisation-encoded qubits. To demonstrate the flexibility of our system, we implement various instances of the quantum algorithm for solving systems of linear equations. PMID:25135432

  1. Making CSB + -Trees Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as the CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance ... of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt the CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of the CSB+-tree on Itanium 2. Finally, we propose ...

  2. Real-time wavefront processors for the next generation of adaptive optics systems: a design and analysis

    Science.gov (United States)

    Truong, Tuan; Brack, Gary L.; Troy, Mitchell; Trinh, Thang; Shi, Fang; Dekany, Richard G.

    2003-02-01

    Adaptive optics (AO) systems currently under investigation will require at least a two-orders-of-magnitude increase in the number of actuators, which in turn translates to effectively a 10^4 increase in compute latency. Since the performance of an AO system invariably improves as the compute latency decreases, it is important to study how today's computer systems will scale to address this expected increase in actuator utilization. This paper answers this question by characterizing the performance of a single deformable mirror (DM) Shack-Hartmann natural guide star AO system implemented on the present-generation digital signal processor (DSP) TMS320C6701 from Texas Instruments. We derive the compute latency of such a system in terms of a few basic parameters, such as the number of DM actuators, the number of data channels used to read out the camera pixels, the number of DSPs, the available memory bandwidth, as well as the inter-processor communication (IPC) bandwidth and the pixel transfer rate. We show how the results would scale for future systems that utilize multiple DMs and guide stars. We demonstrate that the principal performance bottleneck of such a system is the available memory bandwidth of the processors and, to a lesser extent, the IPC bandwidth. This paper concludes with suggestions for mitigating this bottleneck.
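
    The abstract's parameterisation lends itself to a simple cost model. The sketch below is purely illustrative; every term and constant is an invented placeholder standing in for the paper's actual derivation:

        # Toy wavefront-update latency model: camera readout + a memory-bound
        # reconstruction (matrix-vector product) + inter-DSP result exchange.
        # All coefficients are invented placeholders, not the paper's values.
        def ao_latency_s(n_act, n_dsp, mem_bw, ipc_bw, pixel_rate, n_channels):
            readout = 16.0 * n_act / (n_channels * pixel_rate)  # pixels in
            compute = 8.0 * n_act**2 / (n_dsp * mem_bw)         # memory-bound MVM
            comms = 4.0 * n_act * (n_dsp - 1) / ipc_bw          # partial sums out
            return readout + compute + comms

        print(ao_latency_s(n_act=3000, n_dsp=16, mem_bw=1e9,
                           ipc_bw=4e8, pixel_rate=1e7, n_channels=8))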

  3. Efficient quantum walk on a quantum processor.

    Science.gov (United States)

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L; Wang, Jingbo B; Matthews, Jonathan C F

    2016-05-05

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor.

  5. Computational imaging with a single-pixel detector and a consumer video projector

    Science.gov (United States)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel, rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with the help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of the reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. We also discuss the potential use of the single-pixel imaging system for quantum applications.
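
    The measurement model behind the technique fits in a few lines: project patterns, record one intensity per pattern, correlate. The sketch below uses random binary patterns and a plain differential-correlation estimator; the sizes and the estimator choice are assumptions, not the authors' algorithms:

        # Single-pixel imaging sketch: one scalar measurement per illumination
        # pattern, then reconstruction by correlating measurements and patterns.
        import numpy as np

        rng = np.random.default_rng(1)
        H = W = 16
        scene = np.zeros((H, W)); scene[4:12, 6:10] = 1.0   # toy object

        n_patterns = 4000
        patterns = rng.integers(0, 2, size=(n_patterns, H, W)).astype(float)
        measurements = (patterns * scene).sum(axis=(1, 2))  # single-pixel signal

        # Differential correlation reconstruction.
        recon = np.tensordot(measurements - measurements.mean(),
                             patterns - patterns.mean(axis=0), axes=1)
        print(np.unravel_index(recon.argmax(), recon.shape))  # inside the object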

  6. Array processors: an introduction to their architecture, software, and applications in nuclear medicine

    International Nuclear Information System (INIS)

    King, M.A.; Doherty, P.W.; Rosenberg, R.J.; Cool, S.L.

    1983-01-01

    Array processors are ''number crunchers'' that dramatically enhance the processing power of nuclear medicine computer systems for applications dealing with the repetitive operations involved in digital image processing of large segments of data. The general architecture and programming of array processors are introduced, along with some applications of array processors to the reconstruction of emission tomographic images, digital image enhancement, and functional image formation.

  7. Meteorological Processors and Accessory Programs

    Science.gov (United States)

    Surface and upper air data, provided by NWS, are important inputs for air quality models. Before these data are used in some of the EPA dispersion models, meteorological processors are used to manipulate the data.

  8. A fast processor for di-lepton triggers

    CERN Document Server

    Kostarakis, P; Barsotti, E; Conetti, S; Cox, B; Enagonio, J; Haldeman, M; Haynes, W; Katsanevas, S; Kerns, C; Lebrun, P; Smith, H; Soszyniski, T; Stoffel, J; Treptow, K; Turkot, F; Wagner, R

    1981-01-01

    As a new application of the Fermilab ECL-CAMAC logic modules, a fast trigger processor was developed for Fermilab experiment E-537, which aims to measure high-mass di-muon production by antiprotons. The processor matches the hit information received from drift chambers and scintillation counters to find candidate muon tracks and determine their directions and momenta. The tracks are then paired to compute an invariant mass: when the computed mass falls within the desired range, the event is accepted. The process is accomplished in 5 to 10 microseconds, while achieving a trigger rate reduction of up to a factor of ten. (5 refs).
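
    The quantity the processor computes in hardware is the standard two-body invariant mass, m² = (E1 + E2)² − |p1 + p2|². A reference implementation for comparison (the units and sample momenta are illustrative):

        # Invariant mass of a muon pair from the two candidate tracks'
        # three-momenta. Units: GeV.
        import math

        M_MU = 0.1057  # muon mass, GeV

        def invariant_mass(p1, p2):
            """p1, p2 = (px, py, pz) of the two muons."""
            e1 = math.sqrt(M_MU**2 + sum(c * c for c in p1))
            e2 = math.sqrt(M_MU**2 + sum(c * c for c in p2))
            px, py, pz = (a + b for a, b in zip(p1, p2))
            return math.sqrt(max((e1 + e2)**2 - (px**2 + py**2 + pz**2), 0.0))

        print(invariant_mass((1.0, 0.2, 20.0), (-0.8, -0.1, 15.0)))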

  9. Global synchronization of parallel processors using clock pulse width modulation

    Science.gov (United States)

    Chen, Dong; Ellavsky, Matthew R.; Franke, Ross L.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Jeanson, Mark J.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Littrell, Daniel; Ohmacht, Martin; Reed, Don D.; Schenck, Brandon E.; Swetz, Richard A.

    2013-04-02

    A circuit generates a global clock signal with a pulse width modification to synchronize processors in a parallel computing system. The circuit may include a hardware module and a clock splitter. The hardware module may generate a clock signal and performs a pulse width modification on the clock signal. The pulse width modification changes a pulse width within a clock period in the clock signal. The clock splitter may distribute the pulse width modified clock signal to a plurality of processors in the parallel computing system.

  10. OLYMPUS system and development of its pre-processor

    International Nuclear Information System (INIS)

    Okamoto, Masao; Takeda, Tatsuoki; Tanaka, Masatoshi; Asai, Kiyoshi; Nakano, Koh.

    1977-08-01

    The OLYMPUS SYSTEM developed by K. V. Roberts et al. was converted and introduced into the FACOM 230/75 computer system of the JAERI Computing Center. A pre-processor was also developed for the OLYMPUS SYSTEM. The OLYMPUS SYSTEM is very useful for the development, standardization and exchange of programs in thermonuclear fusion research and plasma physics. The pre-processor developed by the present authors is not only essential to the JAERI OLYMPUS SYSTEM, but also useful in the manipulation, creation and correction of program files. (auth.)

  11. Computing platforms for software-defined radio

    CERN Document Server

    Nurmi, Jari; Isoaho, Jouni; Garzia, Fabio

    2017-01-01

    This book addresses Software-Defined Radio (SDR) baseband processing from the computer architecture point of view, providing a detailed exploration of different computing platforms by classifying different approaches, highlighting the common features related to SDR requirements and by showing pros and cons of the proposed solutions. Coverage includes architectures exploiting parallelism by extending single-processor environment (such as VLIW, SIMD, TTA approaches), multi-core platforms distributing the computation to either a homogeneous array or a set of specialized heterogeneous processors, and architectures exploiting fine-grained, coarse-grained, or hybrid reconfigurability. Describes a computer engineering approach to SDR baseband processing hardware; Discusses implementation of numerous compute-intensive signal processing algorithms on single and multicore platforms; Enables deep understanding of optimization techniques related to power and energy consumption of multicore platforms using several basic a...

  12. Multicore: Fallout From a Computing Evolution (LBNL Summer Lecture Series)

    Energy Technology Data Exchange (ETDEWEB)

    Yelick, Kathy [Director, NERSC

    2008-07-22

    Summer Lecture Series 2008: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.

  13. Digital Signal Processors

    Indian Academy of Sciences (India)

    A computer controlling the motion of a satellite should acquire signals from the satellite while it is in motion, compute corrections (if required) to the trajectory, and send control signals back within a specified time for effective control. Delays may be fatal to ... emulators and system evaluation tools have facilitated concurrent ...

  14. Computed tomography angiography and perfusion to assess coronary artery stenosis causing perfusion defects by single photon emission computed tomography

    DEFF Research Database (Denmark)

    Rochitte, Carlos E; George, Richard T; Chen, Marcus Y

    2014-01-01

    AIMS: To evaluate the diagnostic power of integrating the results of computed tomography angiography (CTA) and CT myocardial perfusion (CTP) to identify coronary artery disease (CAD), defined as a flow-limiting coronary artery stenosis causing a perfusion defect by single photon emission computed tomography (SPECT). METHODS AND RESULTS: We conducted a multicentre study to evaluate the accuracy of integrated CTA-CTP for the identification of patients with flow-limiting CAD defined by ≥50% stenosis by invasive coronary angiography (ICA) with a corresponding perfusion deficit on stress single photon ... myocardial infarction, the AUC was 0.90 (95% CI: 0.87-0.94), and in patients without prior CAD the AUC for combined CTA-CTP was 0.93 (95% CI: 0.89-0.97). For the combination of a CTA stenosis ≥50% and a CTP perfusion deficit, the sensitivity, specificity, positive predictive, and negative predictive ...

  15. 7 CFR 926.13 - Processor.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Processor. 926.13 Section 926.13 Agriculture... Processor. Processor means any person who receives or acquires fresh or frozen cranberries or cranberries in the form of concentrate from handlers, producer-handlers, importers, brokers or other processors and...

  16. 40 CFR 791.45 - Processors.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 31 2010-07-01 2010-07-01 true Processors. 791.45 Section 791.45...) DATA REIMBURSEMENT Basis for Proposed Order § 791.45 Processors. (a) Generally, processors will be... processors will have a responsibility to provide reimbursement directly to those paying for the testing: (1...

  17. High performance in silico virtual drug screening on many-core processors.

    Science.gov (United States)

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.

  18. Mobile clusters of single board computers: an option for providing resources to student projects and researchers.

    Science.gov (United States)

    Baun, Christian

    2016-01-01

    Clusters usually consist of servers, workstations or personal computers as nodes. But especially for academic purposes like student projects or scientific projects, the cost of purchase and operation can be a challenge. Single board computers cannot compete with the performance or energy-efficiency of higher-value systems, but they are an option for building inexpensive cluster systems. Because of their compact design and modest energy consumption, it is possible to build clusters of single board computers in a way that makes them mobile and easily transported by the users. This paper describes the construction of such a cluster, useful applications and the performance of the single nodes. Furthermore, the cluster's performance and energy-efficiency are analyzed by executing the High Performance Linpack benchmark with different numbers of nodes and different proportions of the systems' total main memory utilized.

  19. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short-period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, of the signals with magnitude above m_b 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)

  20. Dedicated hardware processor and corresponding system-on-chip design for real-time laser speckle imaging.

    Science.gov (United States)

    Jiang, Chao; Zhang, Hongyan; Wang, Jia; Wang, Yaru; He, Heng; Liu, Rui; Zhou, Fangyuan; Deng, Jialiang; Li, Pengcheng; Luo, Qingming

    2011-11-01

    Laser speckle imaging (LSI) is a noninvasive, full-field optical imaging technique which produces two-dimensional blood flow maps of tissues from the raw laser speckle images captured by a CCD camera without scanning. We present a hardware-friendly algorithm for the real-time processing of laser speckle imaging. The algorithm is developed and optimized specifically for LSI processing in a field programmable gate array (FPGA). Based on this algorithm, we designed a dedicated hardware processor for real-time LSI in the FPGA. A pipeline processing scheme and parallel computing architecture are introduced into the design of this LSI hardware processor. When the LSI hardware processor is implemented in the FPGA running at the maximum frequency of 130 MHz, up to 85 raw images with a resolution of 640×480 pixels can be processed per second. Meanwhile, we also present a system-on-chip (SOC) solution for LSI processing by integrating the CCD controller, memory controller, LSI hardware processor, and LCD display controller into a single FPGA chip. This SOC solution can also be used to produce an application-specific integrated circuit for LSI processing.
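
    The per-pixel computation such a pipeline parallelises is the spatial speckle contrast, K = σ/⟨I⟩ over a small sliding window. A direct, unoptimised reference version; the 7×7 window is a common choice but an assumption here, and the Poisson image is synthetic:

        # Spatial laser speckle contrast over a sliding window.
        import numpy as np

        def speckle_contrast(img: np.ndarray, w: int = 7) -> np.ndarray:
            img = img.astype(float)
            out = np.zeros_like(img)
            r = w // 2
            for i in range(r, img.shape[0] - r):
                for j in range(r, img.shape[1] - r):
                    win = img[i - r:i + r + 1, j - r:j + r + 1]
                    out[i, j] = win.std() / (win.mean() + 1e-12)
            return out

        raw = np.random.default_rng(2).poisson(50, size=(64, 64))
        print(speckle_contrast(raw).max())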

  1. Iterative image reconstruction with a single-board computer employing hardware acceleration

    International Nuclear Information System (INIS)

    Mayans, R.; Rogers, W.L.; Clinthorne, N.H.; Atkins, D.; Chin, I.; Hanao, J.

    1984-01-01

    Iterative reconstruction of tomographic images offers much greater flexibility than filtered backprojection; finite ray width, spatially variant resolution, nonstandard ray geometry, missing angular samples and irregular attenuation maps are all readily accommodated. In addition, various solution strategies such as least squares or maximum entropy can be implemented. The principal difficulty is that either a large computer must be used or the computation time is excessive. The authors have developed an image reconstructor based on the Intel 86/12 single-board computer. The design strategy was first to implement a family of reconstruction algorithms in PL/M-86 and to identify the slowest common computation segments. Next, double-precision arithmetic was recoded and extended addressing calls were replaced with in-line code. Finally, the inner loop was shortened by factoring the computation. Computation times for these versions were in the ratio 1 : 0.75 : 0.5. Using software only, a single iteration of the ART algorithm for finite beam geometry involving 300k pixel weights could be accomplished in 70 seconds, with high-quality images obtained in three iterations. In addition, the authors examined Multibus-compatible hardware additions to further speed the computation. The simplest of these schemes, which performs only the forward projection, has been constructed and is being tested. With this addition, computation time is expected to be reduced by an additional 40%. With this approach, flexible choice of algorithm is combined with reasonable image reconstruction time.
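
    For reference, one sweep of the additive ART update the reconstructor iterates is x ← x + λ (b_i − a_i·x) a_i / ‖a_i‖². A compact sketch on random stand-in data (not a real ray geometry):

        # One ART sweep over all rays; A is the system matrix, b the projections.
        import numpy as np

        def art_sweep(A, b, x, lam=0.5):
            for a_i, b_i in zip(A, b):
                x += lam * (b_i - a_i @ x) * a_i / (a_i @ a_i)
            return x

        rng = np.random.default_rng(3)
        A = rng.random((30, 16)); x_true = rng.random(16); b = A @ x_true
        x = np.zeros(16)
        for _ in range(3):                 # three iterations, as in the record
            x = art_sweep(A, b, x)
        print(np.linalg.norm(A @ x - b))   # residual shrinks toward zero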

  2. Simultaneous fluid-flow, heat-transfer and solid-stress computation in a single computer code

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, D.B. [Concentration Heat and Momentum Ltd, London (United Kingdom)

    1997-12-31

    Computer simulation of flow- and thermally-induced stresses in mechanical-equipment assemblies has, in the past, required the use of two distinct software packages: one to determine the forces and temperatures, and the other to compute the resultant stresses. The present paper describes how a single computer program can perform both tasks at the same time. The technique relies on the similarity of the equations governing velocity distributions in fluids to those governing displacements in solids; the same SIMPLE-like algorithm is used for solving both. Applications to 1-, 2- and 3-dimensional situations are presented. It is further suggested that Solid-Fluid-Thermal (SFT) analysis may come to replace CFD on the one hand and the analysis of stresses in solids on the other, by performing the functions of both. (author) 7 refs.

  3. An energy-efficient high-performance processor with reconfigurable data-paths using RSFQ circuits

    International Nuclear Information System (INIS)

    Takagi, Naofumi

    2013-01-01

    Highlights: an idea for a high-performance computer using RSFQ circuits is shown; an outline of a processor with reconfigurable data-paths (RDPs) is given; and the architectural details of an SFQ-RDP are described. Abstract: We show recent progress in our research on an energy-efficient high-performance processor with reconfigurable data-paths (RDPs) using rapid single-flux-quantum (RSFQ) circuits. We mainly describe the architectural details of an RDP implemented using RSFQ circuits. An RDP consists of many floating-point units (FPUs) and operand routing networks (ORNs) which connect the FPUs. We reconfigure the RDP to fit a computation, i.e., a group of floating-point operations, appearing in a 'for' loop of programs for numerical computations by setting the routes in the ORNs before the execution of the loop. In the RDP, many FPUs work in parallel in a pipelined fashion, and hence very high-performance computation is achieved.

  4. Channel processor in 2D cluster finding algorithm for high energy physics application

    International Nuclear Information System (INIS)

    Paul, Rourab; Chakrabarti, Amlan; Mitra, Jubin; Khan, Shuaib A.; Nayak, Tapan; Mukherjee, Sanjoy

    2016-01-01

    In ALICE (A Large Ion Collider Experiment) at CERN, approximately 1 TB/s of data comes from the front-end electronics. Previously, a single GBT link was operated with cluster clock frequencies of 133 MHz and 320 MHz in Run 1 and Run 2, respectively. The cluster algorithm used in Runs 1 and 2 cannot work in Run 3, as the data rate has increased almost 20-fold. The older version of the cluster algorithm receives data sequentially as a stream. It has two main sub-processes: the channel processor and the merging process. The first step of the channel processor finds a peak Q_max and sums up pad (sensor) data from the -2 time bin to the +2 time bin in the time direction. The computed value is stored in a register named cluster fragment data (cfd_o). The merging process then merges cfd_o values in the pad direction. The data in Run 2 arrive sequentially and are processed by the channel processor and merging block in a sequential manner with very little resource overhead. In Run 3 data arrive in parallel: 1600 values from the 1600 pads of a single time instant arrive every 200 ns (5 MHz), which is very challenging to process within the resource budget of the Arria 10 FPGA hardware with a 250 to 320 MHz cluster clock.
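
    The channel-processor step is easy to state in code: locate the peak Q_max in a pad's time samples and sum the −2…+2 time-bin window around it into a cluster-fragment value. A minimal sketch with invented sample data:

        # Channel-processor step: peak (Q_max) search plus a +/-2 time-bin sum,
        # yielding the cluster-fragment value (cfd). Sample data are invented.
        def cluster_fragment(samples):
            peak = max(range(len(samples)), key=samples.__getitem__)  # Q_max bin
            lo, hi = max(peak - 2, 0), min(peak + 3, len(samples))
            return sum(samples[lo:hi])

        print(cluster_fragment([1, 2, 9, 30, 12, 3, 1]))  # 2+9+30+12+3 = 56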

  5. Encouraging more women into computer science: Initiating a single-sex intervention program in Sweden

    Science.gov (United States)

    Brandell, Gerd; Carlsson, Svante; Ekblom, Håkan; Nord, Ann-Charlotte

    1997-11-01

    The process of starting a new program in computer science and engineering, heavily based on applied mathematics and only open to women, is described in this paper. The program was introduced into an educational system without any tradition in single-sex education. Important observations made during the process included the considerable interest in mathematics and curiosity about computer science found among female students at the secondary school level, and the acceptance of the single-sex program by the staff, administration, and management of the university as well as among male and female students. The process described highlights the importance of preparing the environment for a totally new type of educational program.

  6. A COTS-based single board radiation-hardened computer for space applications

    International Nuclear Information System (INIS)

    Stewart, S.; Hillman, R.; Layton, P.; Krawzsenek, D.

    1999-01-01

    There is great community interest in the ability to use COTS (Commercial-Off-The-Shelf) technology in radiation environments. Space Electronics, Inc. has developed a high performance COTS-based radiation hardened computer. COTS approaches were selected for both hardware and software. Through parts testing, selection and packaging, all requirements have been met without parts or process development. Reliability, total ionizing dose and single event performance are attractive. The characteristics, performance and radiation resistance of the single board computer will be presented. (authors)

  7. The design of a graphics processor

    International Nuclear Information System (INIS)

    Holmes, M.; Thorne, A.R.

    1975-12-01

    The design of a graphics processor is described which takes into account known and anticipated user requirements, the availability of cheap minicomputers, the state of integrated circuit technology, and the overall need to minimise cost for a given performance. The main user needs are the ability to display large, high-resolution pictures and to dynamically change the user's view in real time by means of fast coordinate-processing hardware. The transformations that can be applied to 2D or 3D coordinates, either singly or in combination, are: translation, scaling, mirror imaging, rotation, and the ability to map the transformation origin onto any point on the screen. (author)
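
    In homogeneous coordinates, each of the listed transformations is a small matrix, and composition (including mapping the transformation origin to an arbitrary screen point) is just a matrix product. A 2-D sketch with illustrative values:

        # Homogeneous 2-D transforms as 3x3 matrices; composition by product.
        import numpy as np

        def translate(tx, ty): return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)
        def scale(sx, sy):     return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)
        def mirror_x():        return scale(1, -1)
        def rotate(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

        # Rotate about an arbitrary origin (ox, oy): T(o) @ R @ T(-o).
        ox, oy = 320, 240
        M = translate(ox, oy) @ rotate(np.pi / 6) @ translate(-ox, -oy)
        print(M @ np.array([100.0, 100.0, 1.0]))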

  8. Introduction to Parallel Computing

    Science.gov (United States)

    1992-05-01

    routing, often termed wormhole routing, can eliminate some of these difficulties [Dally:87]. Among the commercially available processors in this category ... [Figure 2-7: Intel Message Routing] ... The iPSC/860 uses a derivative of wormhole ... FORTRAN attempt to identify actual data dependencies vis-a-vis the artificially imposed dependencies which are caused by the nature of these

  9. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  10. Study on irradiation effects of nuclear electromagnetic pulse on single chip computer system

    International Nuclear Information System (INIS)

    Hou Minsheng; Liu Shanghe; Wang Shuping

    2001-01-01

    Intense electromagnetic pulses, namely the nuclear electromagnetic pulse (NEMP), lightning electromagnetic pulse (LEMP) and high power microwave (HPM), can disturb and destroy single chip computer systems. To study this issue, the authors performed irradiation experiments with NEMPs generated by a gigahertz transverse electromagnetic (GTEM) cell. The experiments show that shutdown, restarting, and communication errors of the single chip microcomputer system occur when it is irradiated by NEMPs. Based on the experiments, the cause of the effects on the single chip microcomputer system is discussed.

  11. Benefits of upgrading to the Nucleus® 6 sound processor for a wider clinical population.

    Science.gov (United States)

    Todorov, Michelle J; Galvin, Karyn L

    2018-03-22

    To determine whether a large clinical group of cochlear implant (CI) recipients demonstrated a difference in sentence recognition in noise when using their pre-upgrade sound processor compared to the Nucleus 6 processor, and to examine the impact of the following factors: implant type, sound processor type, age, and onset of hearing loss. A file review of 154 CI recipients (aged 7-92 years) who requested an upgrade to the Nucleus 6 sound processor at the Cochlear Care Centre Melbourne was conducted; 105 recipients had complete data collected according to the protocol. A repeated-measures, single-subject design was used. Performance of CI recipients with their pre-upgrade sound processor was compared against performance with the Nucleus 6 processor using the Australian Sentence Test in Noise. Group performance improved by 4.7 dB with the Nucleus 6 compared with the pre-upgrade sound processor. The benefit was not affected by pre-upgrade sound processor type or implant type (including older implant types and sound processors), age, or onset of hearing loss (pre-lingual versus post-lingual). This study confirmed that a clinical group of CI recipients obtained a significant benefit when upgrading to the Nucleus 6 sound processor.

  12. Experimental and computational characterization of biological liquid crystals: a review of single-molecule bioassays.

    Science.gov (United States)

    Eom, Kilho; Yang, Jaemoon; Park, Jinsung; Yoon, Gwonchan; Soo Sohn, Young; Park, Shinsuk; Yoon, Dae Sung; Na, Sungsoo; Kwon, Taeyun

    2009-09-10

    Quantitative understanding of the mechanical behavior of biological liquid crystals such as proteins is essential for gaining insight into their biological functions, since some proteins perform notable mechanical functions. Recently, single-molecule experiments have allowed not only the quantitative characterization of the mechanical behavior of proteins such as protein unfolding mechanics, but also the exploration of the free energy landscape for protein folding. In this work, we have reviewed the current state of the art in single-molecule bioassays that enable quantitative studies on protein unfolding mechanics and/or various molecular interactions. Specifically, single-molecule pulling experiments based on atomic force microscopy (AFM) have been overviewed. In addition, the computational simulations on single-molecule pulling experiments have been reviewed. We have also reviewed the AFM cantilever-based bioassay that provides insight into various molecular interactions. Our review highlights the AFM-based single-molecule bioassay for quantitative characterization of biological liquid crystals such as proteins.

  13. Myndplay: Measuring Attention Regulation with Single Dry Electrode Brain Computer Interface

    NARCIS (Netherlands)

    van der Wal, C.N.; Irrmischer, M.; Guo, Y.; Friston, K.; Faisal, A.; Hill, S.; Peng, H.

    2015-01-01

    Future applications for the detection of attention can be helped by the development and validation of single electrode brain computer interfaces that are small and user-friendly. The two objectives of this study were: to (1) understand the correlates of attention regulation as detected with the

  14. Computational and experimental studies of the interaction between single-walled carbon nanotubes and folic acid

    DEFF Research Database (Denmark)

    Castillo, John J.; Rozo, Ciro E.; Castillo-León, Jaime

    2013-01-01

    This work involved the preparation of a conjugate between single-walled carbon nanotubes and folic acid that was obtained without covalent chemical functionalization using a simple “one pot” synthesis method. Subsequently, the conjugate was investigated by a computational hybrid method: our own N...

  15. Evaluation of a 99Tcm bound brain scanning agent for single photon emission computed tomography

    DEFF Research Database (Denmark)

    Andersen, A R; Hasselbalch, S G; Paulson, O B

    1986-01-01

    D,L HM-PAO-99Tcm (PAO) is a lipophilic tracer complex which is avidly taken up by the brain. We have compared the regional distribution of PAO with regional cerebral blood flow (CBF). CBF was measured by single photon emission computed tomography (SPECT) by Tomomatic 64 after 133Xe inhalation in 41...

  16. Detection of User Independent Single Trial ERPs in Brain Computer Interfaces: An Adaptive Spatial Filtering Approach

    DEFF Research Database (Denmark)

    Leza, Cristina; Puthusserypady, Sadasivan

    2017-01-01

    Brain Computer Interfaces (BCIs) use brain signals to communicate with the external world. The main challenges to address are speed, accuracy and adaptability. Here, a novel algorithm for P300 based BCI spelling system is presented, specifically suited for single-trial detection of Event...

  17. Single photon emission computed tomography in motor neuron disease with dementia

    Energy Technology Data Exchange (ETDEWEB)

    Sawada, H.; Udaka, F.; Kishi, Y.; Seriu, N.; Ohtani, S.; Abe, K.; Mezaki, T.; Kameyama, M.; Honda, M.; Tomonobu, M.

    1988-12-01

    Single photon emission computed tomography with (123 I) isopropylamphetamine was carried out on a patient with motor neuron disease with dementia. (123 I) uptake was decreased in the frontal lobes. This would reflect the histopathological findings such as neuronal loss and gliosis in the frontal lobes.

  18. Single photon emission computed tomography in motor neuron disease with dementia.

    Science.gov (United States)

    Sawada, H; Udaka, F; Kishi, Y; Seriu, N; Mezaki, T; Kameyama, M; Honda, M; Tomonobu, M

    1988-01-01

    Single photon emission computed tomography with [123 I] isopropylamphetamine was carried out on a patient with motor neuron disease with dementia. [123 I] uptake was decreased in the frontal lobes. This would reflect the histopathological findings such as neuronal loss and gliosis in the frontal lobes.

  19. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
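
    For the nearest-neighbour case, the geometric (checkerboard) lattice partitioning mentioned above is easy to sketch: the lattice is split into two interleaved sublattices so that all sites of one colour can be updated simultaneously, which is exactly the data-parallel step a SIMD processor array executes. The following NumPy sketch is illustrative only (names and parameters are ours, not the paper's); whole-array operations stand in for the processor array:

    ```python
    import numpy as np

    def checkerboard_metropolis(spins, beta, J=1.0, rng=None):
        """One Metropolis sweep of the 2D nearest-neighbour Ising model.

        Sites are split into two 'colours'; all sites of one colour are
        updated at once, mimicking a SIMD array with one lattice site per
        processing element."""
        rng = rng or np.random.default_rng()
        ii, jj = np.indices(spins.shape)
        for colour in (0, 1):
            mask = (ii + jj) % 2 == colour
            # Sum of the four nearest neighbours (periodic boundaries).
            nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                  np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
            dE = 2.0 * J * spins * nn           # energy cost of flipping each site
            accept = rng.random(spins.shape) < np.exp(-beta * dE)
            spins[mask & accept] *= -1          # flip accepted sites of this colour
        return spins

    spins = np.random.default_rng(0).choice([-1, 1], size=(64, 64))
    for _ in range(100):
        checkerboard_metropolis(spins, beta=0.5)
    print("magnetisation per site:", spins.mean())
    ```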

  20. Reward-based learning under hardware constraints - Using a RISC processor embedded in a neuromorphic substrate

    Directory of Open Access Journals (Sweden)

    Simon eFriedmann

    2013-09-01

    In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e. the increase in received reward after learning. Using this performance as baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider communication latency between wafer and the conventional control computer system that is simulating the environment. This latency increases the delay with which the reward is sent to the embedded processor. Because of the time-continuous operation of the analog synapses, delay can cause a deviation of the updates as compared to the non-delayed situation. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.

  1. Slowdown in the $M/M/1$ discriminatory processor-sharing queue

    NARCIS (Netherlands)

    Cheung, S.K.; Kim, Bara; Kim, Jeongsim

    2008-01-01

    We consider a queue with K job classes, Poisson arrivals, and exponentially distributed required service times in which a single processor serves according to the discriminatory processor-sharing (DPS) discipline. For this queue, we obtain the first and second moments of the slowdown, which

  2. Java Processor Optimized for RTSJ

    Directory of Open Access Journals (Sweden)

    Tu Shiliang

    2007-01-01

    Due to the preeminent work of the real-time specification for Java (RTSJ), Java is increasingly expected to become the leading programming language in real-time systems. To provide a Java platform suitable for real-time applications, a Java processor which can directly execute Java bytecode is proposed in this paper. It provides efficient hardware support for some mechanisms specified in the RTSJ and offers a simpler programming model by ameliorating the scoped memory of the RTSJ. The worst case execution time (WCET) of the bytecodes implemented in this processor is predictable by employing the optimization method proposed in our previous work, in which all processing that interferes with predictability is handled before bytecode execution. A further advantage of this method is that it makes the implementation of the processor simpler and suited to a low-cost FPGA chip.

  3. Using a Multicore Processor for Rover Autonomous Science

    Science.gov (United States)

    Bornstein, Benjamin; Estlin, Tara; Clement, Bradley; Springer, Paul

    2011-01-01

    Multicore processing promises to be a critical component of future spacecraft. It provides immense increases in onboard processing power and provides an environment for directly supporting fault-tolerant computing. This paper discusses using a state-of-the-art multicore processor to efficiently perform image analysis onboard a Mars rover in support of autonomous science activities.

  4. Experiences with Compiler Support for Processors with Exposed Pipelines

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Schleuniger, Pascal; Hindborg, Andreas Erik

    2015-01-01

    Field programmable gate arrays, FPGAs, have become an attractive implementation technology for a broad range of computing systems. We recently proposed a processor architecture, Tinuso, which achieves high performance by moving complexity from hardware to the compiler tool chain. This means...

  5. Elementary function calculation programs for the central processor-6

    International Nuclear Information System (INIS)

    Dobrolyubov, L.V.; Ovcharenko, G.A.; Potapova, V.A.

    1976-01-01

    Subprograms for calculating elementary functions are given for the central processor (CP AS-6). A procedure is described for obtaining the formulae which represent the elementary functions as polynomials. Standard programs for random numbers are considered. All the programs described are based upon the algorithms of the respective programs for the BESM computer

  6. ALICE chip processor

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    This tiny chip provides data processing for the time projection chamber on ALICE. Known as the ALICE TPC Read Out (ALTRO), this device was designed to minimize the size and power consumption of the TPC front end electronics. This single chip contains 16 low-power analogue-to-digital converters with six million transistors of digital processing and 8 kbits of data storage.

  7. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT
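
    The key optimization in this record, replacing the spatial convolution with the projected pixel profile by a frequency-space multiplication, is easy to illustrate in software. The toy sketch below (all names and sizes are our own illustration, not from the paper) checks that an FFT-based circular convolution matches the directly computed one:

    ```python
    import numpy as np

    def convolve_via_fft(signal, kernel):
        """Circular convolution by frequency-space multiplication, the same
        trick the projection algorithm uses in place of spatial convolution."""
        n = len(signal)
        return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, n)))

    rng = np.random.default_rng(1)
    proj_bins = rng.random(256)        # finely subdivided projection bins
    pixel_kernel = np.zeros(256)
    pixel_kernel[:4] = 0.25            # stand-in for the projected square-pixel profile

    # Direct O(n^2) circular convolution for comparison.
    direct = np.array([sum(proj_bins[(i - k) % 256] * pixel_kernel[k]
                           for k in range(256)) for i in range(256)])
    assert np.allclose(convolve_via_fft(proj_bins, pixel_kernel), direct)
    ```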

  8. SAD PROCESSOR FOR MULTIPLE MACROBLOCK MATCHING IN FAST SEARCH VIDEO MOTION ESTIMATION

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2015-02-01

    Motion estimation is a very important but computationally complex task in video coding. The process of determining motion vectors based on the temporal correlation of consecutive frames is used for video compression. In order to reduce the computational complexity of motion estimation and maintain the quality of encoding during motion compensation, different fast search techniques are available. These block-based motion estimation algorithms use the sum of absolute differences (SAD) between the corresponding macroblock in the current frame and all candidate macroblocks in the reference frame to identify the best match. Existing implementations can compute the SAD between two blocks using a sequential or pipelined approach, but performing a multi-operand SAD in a single clock cycle with optimized resources is state of the art. In this paper various parallel architectures for computation of the fixed-block-size SAD are evaluated, and a fast parallel SAD architecture with optimized resources is proposed. Further, a SAD processor with 9 processing elements is described, which can be configured for any existing fast search block matching algorithm. The proposed SAD processor consumes 7% fewer adders than an existing implementation with one processing element. Using nine PEs it can process 84 HD frames per second in the worst case, which is a good outcome for real-time implementation; in the average case the architecture processes 325 HD frames per second.
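
    For readers unfamiliar with the cost function, the sketch below shows a plain single-operand SAD and a full-search matcher in software. It is only a reference model of what the hardware evaluates, since the paper's processor computes many candidate SADs per clock cycle; all names and the search window size are illustrative:

    ```python
    import numpy as np

    def sad(block_a, block_b):
        """Sum of absolute differences between two macroblocks.
        Cast to int32 first so uint8 pixel values cannot wrap around."""
        return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

    def best_match(current_block, reference, top, left, search_range=7):
        """Full-search block matching: evaluate the SAD at every candidate
        offset within +/- search_range pixels and keep the minimum."""
        n = current_block.shape[0]
        best = (None, float("inf"))
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = top + dy, left + dx
                if 0 <= y <= reference.shape[0] - n and 0 <= x <= reference.shape[1] - n:
                    cost = sad(current_block, reference[y:y + n, x:x + n])
                    if cost < best[1]:
                        best = ((dy, dx), cost)
        return best  # (motion vector, SAD)

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    cur = np.roll(ref, (2, -3), axis=(0, 1))      # frame shifted by a known motion
    mv, cost = best_match(cur[16:32, 16:32], ref, top=16, left=16)
    print("motion vector:", mv, "SAD:", cost)
    ```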

  9. Fast processor for dilepton triggers

    International Nuclear Information System (INIS)

    Katsanevas, S.; Kostarakis, P.; Baltrusaitis, R.

    1983-01-01

    We describe a fast trigger processor, developed for and used in Fermilab experiment E-537, for selecting high-mass dimuon events produced by negative pions and anti-protons. The processor finds candidate tracks by matching hit information received from drift chambers and scintillation counters, and determines their momenta. Invariant masses are calculated for all possible pairs of tracks and an event is accepted if any invariant mass is greater than some preselectable minimum mass. The whole process, accomplished within 5 to 10 microseconds, achieves up to a ten-fold reduction in trigger rate

  10. Optical Array Processor: Laboratory Results

    Science.gov (United States)

    Casasent, David; Jackson, James; Vaerewyck, Gerard

    1987-01-01

    A Space Integrating (SI) Optical Linear Algebra Processor (OLAP) is described and laboratory results on its performance in several practical engineering problems are presented. The applications include its use in the solution of a nonlinear matrix equation for optimal control and a parabolic Partial Differential Equation (PDE), the transient diffusion equation with two spatial variables. Frequency-multiplexed, analog and high accuracy non-base-two data encoding are used and discussed. A multi-processor OLAP architecture is described and partitioning and data flow issues are addressed.

  11. Making CSB + -Trees Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performan...... a systematic method for adapting CSB+-tree to new platforms. This work is a first step towards integrating CSB+-tree in MySQL’s heap storage manager....

  12. An HDLC communication processor

    International Nuclear Information System (INIS)

    Brehmer, W.; Wawer, W.

    1981-09-01

    Systems for data acquisition and process control may comprise several intelligent stations with local computing power, each of them performing specific tasks in the control system. These stations generally are not independent of each other but are interconnected by the process being monitored or controlled. Therefore they must communicate with each other by transmitting or receiving messages which may contain instructions directed to the control system, status information and data from peripheral devices, variables which synchronize the execution of programs in the autonomous intelligent stations, and data for man-machine communication. This report describes a microprocessor based device which interfaces the I/O port of a host computer (CAMAC Dataway) to an HDLC bus system. The microprocessor (M6800) performs the HDLC protocol for a Multi-Point Unbalanced System. (orig.)

  13. No-go theorem for passive single-rail linear optical quantum computing.

    Science.gov (United States)

    Wu, Lian-Ao; Walther, Philip; Lidar, Daniel A

    2013-01-01

    Photonic quantum systems are among the most promising architectures for quantum computers. It is well known that for dual-rail photons effective non-linearities and near-deterministic non-trivial two-qubit gates can be achieved via the measurement process and by introducing ancillary photons. While in principle this opens a legitimate path to scalable linear optical quantum computing, the technical requirements are still very challenging and thus other optical encodings are being actively investigated. One of the alternatives is to use single-rail encoded photons, where entangled states can be deterministically generated. Here we prove that even for such systems universal optical quantum computing using only passive optical elements such as beam splitters and phase shifters is not possible. This no-go theorem proves that photon bunching cannot be passively suppressed even when extra ancilla modes and an arbitrary number of photons are used. Our result provides useful guidance for the design of optical quantum computers.

  14. Matrix preconditioning: a robust operation for optical linear algebra processors.

    Science.gov (United States)

    Ghosh, A; Paparao, P

    1987-07-15

    Analog electrooptical processors are best suited for applications demanding high computational throughput with tolerance for inaccuracies. Matrix preconditioning is one such application. Matrix preconditioning is a preprocessing step for reducing the condition number of a matrix and is used extensively with gradient algorithms for increasing the rate of convergence and improving the accuracy of the solution. In this paper, we describe a simple parallel algorithm for matrix preconditioning, which can be implemented efficiently on a pipelined optical linear algebra processor. From the results of our numerical experiments we show that the efficacy of the preconditioning algorithm is affected very little by the errors of the optical system.
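
    The abstract does not spell out which preconditioner is used, but the idea is easy to demonstrate with the simplest choice, a diagonal (Jacobi) preconditioner. This toy NumPy sketch, using an illustrative badly scaled matrix of our own, shows the condition number dropping while the solution of the system is unchanged:

    ```python
    import numpy as np

    def jacobi_precondition(A, b):
        """Diagonal (Jacobi) preconditioning: scale the system by D^-1
        to reduce the condition number before applying a gradient solver."""
        M_inv = np.diag(1.0 / np.diag(A))
        return M_inv @ A, M_inv @ b

    # Badly scaled symmetric test matrix (illustrative only).
    A = np.diag([1.0, 1e2, 1e4]) + 0.1
    b = np.ones(3)
    A_pre, b_pre = jacobi_precondition(A, b)
    print("condition number before:", np.linalg.cond(A))
    print("condition number after :", np.linalg.cond(A_pre))
    # Preconditioning must not change the solution itself.
    assert np.allclose(np.linalg.solve(A, b), np.linalg.solve(A_pre, b_pre))
    ```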

  15. Use of emulating processors in high energy physics

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1981-01-01

    The need for data processing in high energy physics is growing at a rate that exceeds the data processing capabilities of traditional computers. The use of emulating processors is one method to fill this growing gap. This paper will analyze the data processing requirements from the point of view of program development and data production. Needs in both off-line and on-line environments will be considered. It will be shown that emulating processors fulfill most of the requirements at a reasonable cost. (Auth.)

  16. Ring-array processor distribution topology for optical interconnects

    Science.gov (United States)

    Li, Yao; Ha, Berlin; Wang, Ting; Wang, Sunyu; Katz, A.; Lu, X. J.; Kanterakis, E.

    1992-01-01

    The existing linear and rectangular processor distribution topologies for optical interconnects, although promising in many respects, cannot solve problems such as clock skews, the lack of supporting elements for efficient optical implementation, etc. The use of a ring-array processor distribution topology, however, can overcome these problems. Here, a study of the ring-array topology is conducted with an aim of implementing various fast clock rate, high-performance, compact optical networks for digital electronic multiprocessor computers. Practical design issues are addressed. Some proof-of-principle experimental results are included.

  17. Parallel Processor for 3D Recovery from Optical Flow

    Directory of Open Access Journals (Sweden)

    Jose Hugo Barron-Zambrano

    2009-01-01

    3D recovery from motion has received major attention in computer vision systems in recent years. The main problem lies in the number of operations and memory accesses to be performed by the majority of the existing techniques when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main features are the maximum reuse of data and the low number of clock cycles needed to calculate the optical flow, along with the precision with which 3D recovery is achieved. The results of the proposed architecture, as well as those from processor synthesis, are presented.

  18. REVIEW: Optical logic elements for high-throughput optical processors

    Science.gov (United States)

    Fedorov, V. B.

    1990-12-01

    An analysis is made of the current state, problems, and prospects of the development of optical logic elements and threshold light amplifiers for high-throughput computing. The specific case is considered of a variant of an optical processor capable of 10^13-10^14 arithmetic operations per second under conditions of pipelined processing of two-dimensional arrays of multidigit binary operands. The basic requirements which must be satisfied by the parameters and characteristics of optical logic elements in such a processor are identified.

  19. The associative memory system for the FTK processor at ATLAS

    CERN Document Server

    Magalotti, D; The ATLAS collaboration; Donati, S; Luciano, P; Piendibene, M; Giannetti, P; Lanza, A; Verzellesi, G; Sakellariou, Andreas; Billereau, W; Combe, J M

    2014-01-01

    In high energy physics experiments, the most interesting processes are very rare and hidden in an extremely large level of background. As the experiment complexity, accelerator backgrounds, and instantaneous luminosity increase, more effective and accurate data selection techniques are needed. The Fast TracKer processor (FTK) is a real time tracking processor designed for the ATLAS trigger upgrade. The FTK core is the Associative Memory system. It provides massive computing power to minimize the processing time of complex tracking algorithms executed online. This paper reports on the results and performance of a new prototype of Associative Memory system.

  20. A computer based Moessbauer spectrometer system

    International Nuclear Information System (INIS)

    Jin Ge; Li Yuzhi; Yin Zejie; Yao Chunbo; Li Tie; Tan Yexian; Wang Jian

    1999-01-01

    A computer based Moessbauer spectrometer system with a single chip processor for online control and data acquisition is developed. The spectrometer is designed as a single-width NIM module and can be operated directly in a NIM crate. Because the structure of the spectrometer is designed to be quite flexible, the system can easily be configured with other kinds of Moessbauer drivers, and can be used in other data acquisition systems

  1. Wavelength-encoded OCDMA system using opto-VLSI processors

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbits/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  3. Processor farming in two-level analysis of historical bridge

    Science.gov (United States)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain mesoscopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.
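
    In software terms, processor farming amounts to a master distributing many independent meso-scale solves to worker processors and collecting the effective parameters. A minimal sketch using Python's multiprocessing is given below; the RVE "solver" is a hypothetical placeholder, not the paper's finite-element code:

    ```python
    from multiprocessing import Pool

    def solve_rve(macro_point):
        """Stand-in for one meso-scale RVE solve: returns an effective
        material parameter for a macroscopic integration point.
        (Hypothetical model; a real solver would run a finite-element
        analysis of the RVE here.)"""
        x, y, z = macro_point
        return 1.0 + 0.01 * (x + y + z)   # placeholder constitutive result

    if __name__ == "__main__":
        # Farm the independent meso-scale problems out to worker processes;
        # the master then assembles the effective parameters for the
        # macro-scale problem.
        macro_points = [(i * 0.5, 0.0, 0.0) for i in range(1000)]
        with Pool(processes=8) as farm:
            effective_params = farm.map(solve_rve, macro_points)
        print(len(effective_params), "integration points homogenised")
    ```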

  4. The L0(muon) processor

    CERN Document Server

    Aslanides, Elie; Le Gac, R; Menouni, M; Potheau, R; Tsaregorodtsev, A Yu; Tsaregorodtsev, Andrei

    1999-01-01

    In this note (99-008) we review the Marseille implementation of the L0(muon) processor. We describe the data flow, the hardware implementation, and synchronization issues, as well as our first ideas on the debugging and monitoring procedures. We also present the performance of the proposed architecture with an estimate of its cost.

  5. GENERALIZED PROCESSOR SHARING (GPS) TECHNIQUES

    African Journals Online (AJOL)

    Olumide

    popular technique, Generalized Processor Sharing (GPS), provided an effective and efficient utilization of the available resources at the face of stringent and varied QoS requirements. This paper, therefore, presents the comparison of two GPS techniques –. PGPS and CDGPS – based on performance with limited resources ...

  6. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    memory stage. The fetch stage fetches instructions from the cache. In this stage, current day processors (like the IA-64) also incorporate a branch prediction unit. The branch prediction unit predicts the direction of branch instructions and speculatively fetches instructions from the predicted path. This is necessary to keep the ...

  7. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    S Balakrishnan, General Article, Resonance – Journal of Science Education, Volume 6, Issue 12, December 2001, pp. 61-68. Permanent link: http://www.ias.ac.in/article/fulltext/reso/006/12/0061-0068

  8. Computer-automated tuning of semiconductor double quantum dots into the single-electron regime

    Energy Technology Data Exchange (ETDEWEB)

    Baart, T. A.; Vandersypen, L. M. K. [QuTech, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Kavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Eendebak, P. T. [QuTech, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Netherlands Organisation for Applied Scientific Research (TNO), P.O. Box 155, 2600 AD Delft (Netherlands); Reichl, C.; Wegscheider, W. [Solid State Physics Laboratory, ETH Zürich, 8093 Zürich (Switzerland)

    2016-05-23

    We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the double quantum dots into the single-electron regime. The algorithm only requires (1) prior knowledge of the gate design and (2) the pinch-off value of the single gate T that is shared by all the quantum dots. This work significantly alleviates the user effort required to tune multiple quantum dot devices.

  9. Flight design system level C requirements. Solid rocket booster and external tank impact prediction processors. [space transportation system

    Science.gov (United States)

    Seale, R. H.

    1979-01-01

    The prediction of the SRB and ET impact areas requires six separate processors. The SRB impact prediction processor computes the impact areas and related trajectory data for each SRB element. Output from this processor is stored on a secure file accessible by the SRB impact plot processor which generates the required plots. Similarly, the ET RTLS impact prediction processor and the ET RTLS impact plot processor generate the ET impact footprints for return-to-launch-site (RTLS) profiles. The ET nominal/AOA/ATO impact prediction processor and the ET nominal/AOA/ATO impact plot processor generate the ET impact footprints for non-RTLS profiles. The SRB and ET impact processors compute the size and shape of the impact footprints by tabular lookup in a stored footprint dispersion data base. The location of each footprint is determined by simulating a reference trajectory and computing the reference impact point location. To ensure consistency among all flight design system (FDS) users, much input required by these processors will be obtained from the FDS master data base.

  10. Cassava processors' awareness of occupational and environmental ...

    African Journals Online (AJOL)

    ) is not without hazards both to the environment, the processors, and even the consumers. This study, therefore, investigated cassava processors' awareness of occupational and environmental hazards associated with and factors affecting ...

  11. The role of dendritic non-linearities in single neuron computation

    Directory of Open Access Journals (Sweden)

    Boris Gutkin

    2014-05-01

    Experiment has demonstrated that summation of excitatory post-synaptic potentials (EPSPs) in dendrites is non-linear. The sum of multiple EPSPs can be larger than their arithmetic sum, a superlinear summation due to the opening of voltage-gated channels that is similar to somatic spiking: the so-called dendritic spike. The sum of multiple EPSPs can also be smaller than their arithmetic sum, because the synaptic current necessarily saturates at some point. While these observations are well explained by biophysical models, the impact of dendritic spikes on computation remains a matter of debate. One reason is that dendritic spikes may fail to make the neuron spike; similarly, dendritic saturations are sometimes presented as a glitch which should be corrected by dendritic spikes. We provide solid arguments against this claim and show that dendritic saturations as well as dendritic spikes enhance single neuron computation, even when they cannot directly make the neuron fire. To explore the computational impact of dendritic spikes and saturations, we use a binary neuron model in conjunction with Boolean algebra. We demonstrate using these tools that a single dendritic non-linearity, either spiking or saturating, combined with the somatic non-linearity, enables a neuron to compute linearly non-separable Boolean functions (lnBfs). These functions are impossible to compute when summation is linear, and the exclusive OR is a famous example of lnBfs. Importantly, the implementation of these functions does not require the dendritic non-linearity to make the neuron spike. Next, we show that reduced and realistic biophysical models of the neuron are capable of computing lnBfs. Within these models, and contrary to the binary model, the dendritic and somatic non-linearities are tightly coupled. Yet we show that these neuron models are capable of linearly non-separable computations.
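
    The binary-neuron argument can be reproduced in a few lines. The sketch below is our own minimal construction in the spirit of the abstract, not the paper's exact model: two superlinear "spiking" dendritic subunits followed by a somatic threshold compute (x1 AND x2) OR (x3 AND x4), a linearly non-separable function, because the two firing patterns and the two silent patterns sum to the same input vector and so cannot be separated by any single linear threshold:

    ```python
    def spiking_dendrite(drive, threshold=2):
        """Superlinear subunit: a 'dendritic spike' when drive reaches threshold."""
        return 1 if drive >= threshold else 0

    def neuron_two_dendrites(x1, x2, x3, x4):
        """Binary two-stage neuron: inputs x1,x2 target dendrite 1 and
        x3,x4 dendrite 2; the soma fires if either subunit spikes.
        This computes (x1 AND x2) OR (x3 AND x4)."""
        d1 = spiking_dendrite(x1 + x2)
        d2 = spiking_dendrite(x3 + x4)
        return 1 if d1 + d2 >= 1 else 0

    # (1,1,0,0) and (0,0,1,1) must fire; (1,0,1,0) and (0,1,0,1) must not.
    # Both pairs sum to the same vector (1,1,1,1), so no linear threshold
    # on the raw inputs can produce this labelling.
    for pattern in [(1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1)]:
        print(pattern, "->", neuron_two_dendrites(*pattern))
    ```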

  12. Safety-Critical Java on a Time-predictable Processor

    DEFF Research Database (Denmark)

    Korsholm, Stephan Erbs; Schoeberl, Martin; Puffitsch, Wolfgang

    2015-01-01

    For real-time systems the whole execution stack needs to be time-predictable and analyzable for the worst-case execution time (WCET). This paper presents a time-predictable platform for safety-critical Java. The platform consists of (1) the Patmos processor, which is a time-predictable processor......; (2) a C compiler for Patmos with support for WCET analysis; (3) the HVM, which is a Java-to-C compiler; (4) the HVM-SCJ implementation which supports SCJ Level 0, 1, and 2 (for both single and multicore platforms); and (5) a WCET analysis tool. We show that real-time Java programs translated to C...... and compiled to a Patmos binary can be analyzed by the AbsInt aiT WCET analysis tool. To the best of our knowledge the presented system is the second WCET analyzable real-time Java system; and the first one on top of a RISC processor....

  14. A Josephson systolic array processor for multiplication/addition operations

    International Nuclear Information System (INIS)

    Morisue, M.; Li, F.Q.; Tobita, M.; Kaneko, S.

    1991-01-01

    A novel Josephson systolic array processor to perform multiplication/addition operations is proposed. The systolic array processor proposed here consists of a set of three kinds of interconnected cells whose main circuits are built from SQUID gates. A multiplication of 2 bits by 2 bits is performed in a single cell at a time, and an addition of three two-bit operands is simultaneously performed in another type of cell. Furthermore, information in this system flows between cells in a pipeline fashion so that high performance can be achieved. In this paper the principle of the Josephson systolic array processor is described in detail and simulation results are illustrated for the multiplication/addition of (4 bits x 4 bits + 8 bits). The results show that these operations can be executed in 330 ps

  15. Optimal partitioning of random programs across two processors

    Science.gov (United States)

    Nicol, David M.

    1989-01-01

    The optimal partitioning of random-distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
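
    The dichotomy reported here, all modules on one processor versus an even spread, can be illustrated with a crude Monte Carlo model. The sketch below is a toy with i.i.d. exponential module times and no communication costs (simpler than the paper's model); it estimates the expected makespan for every split of n modules across two processors:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def expected_makespan(n_modules, split, trials=20000):
        """Monte Carlo estimate of E[max(load1, load2)] when `split` of the
        n i.i.d. exponential module times run on processor 1 and the rest
        on processor 2. (Toy model; the paper's analysis also accounts for
        the program's structure and communication.)"""
        times = rng.exponential(1.0, size=(trials, n_modules))
        load1 = times[:, :split].sum(axis=1)
        load2 = times[:, split:].sum(axis=1)
        return np.maximum(load1, load2).mean()

    n = 10
    for split in range(0, n + 1):
        print(f"{split:2d} modules on P1 -> E[makespan] = "
              f"{expected_makespan(n, split):.3f}")
    # The minimum occurs at the even split in this communication-free model;
    # once communication costs are added, all-on-one-processor can win instead.
    ```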

  16. Application of Computer Program Carsim for Modelling Single and Double Lane Change Manoeuvres

    Directory of Open Access Journals (Sweden)

    Artūras Žukas

    2012-11-01

    The paper analyzes the possibilities of using computer-aided modelling programs in the development of new cars to achieve better dynamic control properties of vehicles. The article briefly reviews the behaviour of young and experienced drivers and the models describing it. The paper covers the process of turning the car steering wheel, considers the values of lateral acceleration that are comfortable for the driver and all car passengers, and presents the computer-aided modelling program CarSim used for displaying single and double lane change manoeuvres at various speeds on dry asphalt. The given charts, which include steering wheel angle and lateral acceleration values, illustrate the single and double lane change manoeuvres performed by a car. Also, the values of the longitudinal and lateral forces on each wheel during the double lane change manoeuvre are provided.

  17. Deterministic chaos in the processor load

    International Nuclear Information System (INIS)

    Halbiniak, Zbigniew; Jozwiak, Ireneusz J.

    2007-01-01

    In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case

  18. Degenerative dementia: nosological aspects and results of single photon emission computed tomography

    International Nuclear Information System (INIS)

    Dubois, B.; Habert, M.O.

    1999-01-01

    Ten years ago, the differential diagnosis of dementia in an elderly patient was limited to two pathologies: Alzheimer's disease and Pick's disease. In recent years, the framework of these primary degenerative dementias has broken into pieces. The different diseases and the results obtained with single photon emission computed tomography are discussed, for example: fronto-temporal dementia, primary progressive aphasia, progressive apraxia, visuo-spatial dysfunction, dementia with Lewy bodies, and cortico-basal degeneration. (N.C.)

  19. Evaluation of a 99Tcm bound brain scanning agent for single photon emission computed tomography

    DEFF Research Database (Denmark)

    Andersen, A R; Hasselbalch, S G; Paulson, O B

    1986-01-01

    D,L HM-PAO-99Tcm (PAO) is a lipophilic tracer complex which is avidly taken up by the brain. We have compared the regional distribution of PAO with regional cerebral blood flow (CBF). CBF was measured by single photon emission computed tomography (SPECT) by Tomomatic 64 after 133Xe inhalation in ...... of the (decay corrected) brain counts were lost during 24 hours....

  20. CuPIDS: An Exploration of Highly Focused, Co-Processor-Based Information System Protection

    National Research Council Canada - National Science Library

    Williams, Paul D; Spafford, Eugene H

    2005-01-01

    The Co-Processing Intrusion Detection System (CuPIDS) project is exploring how to improve information system security by dedicating computational resources to system security tasks in a shared resource, multi-processor (MP) architecture...

  1. HTGR core seismic analysis using an array processor

    International Nuclear Information System (INIS)

    Shatoff, H.; Charman, C.M.

    1983-01-01

    A Floating Point Systems array processor performs nonlinear dynamic analysis of the high-temperature gas-cooled reactor (HTGR) core with significant time and cost savings. The graphite HTGR core consists of approximately 8000 blocks of various shapes which are subject to motion and impact during a seismic event. Two-dimensional computer programs (CRUNCH2D, MCOCO) can perform explicit step-by-step dynamic analyses of up to 600 blocks for time-history motions. However, use of two-dimensional codes was limited by the large cost and run times required. Three-dimensional analysis of the entire core, or even a large part of it, had been considered totally impractical. Because of the needs of the HTGR core seismic program, a Floating Point Systems array processor was used to enhance computer performance of the two-dimensional core seismic computer programs, MCOCO and CRUNCH2D. This effort began by converting the computational algorithms used in the codes to a form which takes maximum advantage of the parallel and pipeline processors offered by the architecture of the Floating Point Systems array processor. The subsequent conversion of the vectorized FORTRAN coding to the array processor required a significant programming effort to make the system work on the General Atomic (GA) UNIVAC 1100/82 host. These efforts were quite rewarding, however, since the cost of running the codes has been reduced approximately 50-fold and the time threefold. The core seismic analysis with large two-dimensional models has now become routine and extension to three-dimensional analysis is feasible. These codes simulate the one-fifth-scale full-array HTGR core model. This paper compares the analysis with the test results for sine-sweep motion

  2. Reconfigurable Very Long Instruction Word (VLIW) Processor

    Science.gov (United States)

    Velev, Miroslav N.

    2015-01-01

    Future NASA missions will depend on radiation-hardened, power-efficient processing systems-on-a-chip (SOCs) that consist of a range of processor cores custom tailored for space applications. Aries Design Automation, LLC, has developed a processing SOC that is optimized for software-defined radio (SDR) uses. The innovation implements the Institute of Electrical and Electronics Engineers (IEEE) RazorII voltage management technique, a microarchitectural mechanism that allows processor cores to self-monitor, self-analyze, and self-heal after timing errors, regardless of their cause (e.g., radiation; chip aging; variations in the voltage, frequency, temperature, or manufacturing process). This highly automated SOC can also execute legacy binary code for the PowerPC 750 instruction set architecture (ISA), which is used in the flight-control computers of many previous NASA space missions. In developing this innovation, Aries Design Automation has made significant contributions to the fields of formal verification of complex pipelined microprocessors and Boolean satisfiability (SAT) and has developed highly efficient electronic design automation tools that hold promise for future developments.

  3. 7 CFR 1215.14 - Processor.

    Science.gov (United States)

    2010-01-01

    Title 7 Agriculture, vol. 10 (2010-01-01). Section 1215.14, Agriculture Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... Processor means a person engaged in the preparation of unpopped popcorn for the market who owns...

  4. 7 CFR 989.13 - Processor.

    Science.gov (United States)

    2010-01-01

    Title 7 Agriculture, vol. 8 (2010-01-01). Section 989.13, Agriculture Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (Marketing Agreements... CALIFORNIA, Order Regulating Handling, Definitions, § 989.13 Processor. Processor means any person who receives...

  5. 7 CFR 927.14 - Processor.

    Science.gov (United States)

    2010-01-01

    Title 7 Agriculture, vol. 8 (2010-01-01). Section 927.14, Agriculture Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Order Regulating Handling, Definitions, § 927.14 Processor. Processor means any person who as owner, agent...

  6. Radio frequency single electron transistors: readout for a solid state quantum computer

    International Nuclear Information System (INIS)

    Buehler, T.M.; Reilly, D.J.; Starrett, R.P.; Brenner, R.; Hamilton, A.R.; Clark, R.G.; Court, N.A.; Dzurak, A.S.

    2002-01-01

    Quantum computers promise unprecedented computational power if they can be scaled to a large number of qubits. Essential to the operation of such a machine is readout: the determination of the final quantum state of the system. In the case of the silicon based solid state architecture proposed by Kane, readout is achieved by determining the direction of a single electron spin via the detection of a spin dependent tunneling event. This requires a highly sensitive electrometer that can detect the motion of a single electron on a timescale shorter than the spin relaxation time. The Radio Frequency Single Electron Transistor (RF-SET) is a device that possesses both the charge sensitivity (δq ∼ 10^-6 e/√Hz, approaching the quantum limit) and the fast response required to perform readout in such a system. Here we describe the fabrication and operation of transmission mode RF-SETs and discuss the application of these novel electrometers in the readout of a solid state quantum computer

  7. Broadcasting collective operation contributions throughout a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
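
    The two-phase scheme described above, intra-node exchange first and then a serial per-processor transmission sequence between nodes, can be modeled abstractly. The following sketch is a plain-Python simulation of the data flow only (node names and contributions are invented), not an MPI implementation:

    ```python
    def broadcast_contributions(cluster):
        """Simulate the two-phase broadcast. `cluster` maps node -> list of
        per-processor contributions. Phase 1 shares contributions inside
        each node (e.g., via shared memory); phase 2 sends each processor's
        contribution to every other node, one transmission per processor
        in a serial sequence over the network link."""
        # Phase 1: after the intra-node exchange, every processor on a node
        # knows all of that node's contributions.
        known = {node: list(contribs) for node, contribs in cluster.items()}
        # Phase 2: serial processor transmission sequence over the network.
        for node, contribs in cluster.items():
            for contribution in contribs:            # one link transmission each
                for other in cluster:
                    if other != node:
                        known[other].append(contribution)
        return known

    cluster = {"node0": [1, 2], "node1": [3, 4], "node2": [5, 6]}
    result = broadcast_contributions(cluster)
    # Every node ends up with every processor's contribution.
    assert all(sorted(v) == [1, 2, 3, 4, 5, 6] for v in result.values())
    ```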

  8. Using graphics processors to accelerate protein docking calculations.

    Science.gov (United States)

    Ritchie, David W; Venkatraman, Vishwesh; Mavridis, Lazaros

    2010-01-01

    Protein docking is the computationally intensive task of calculating the three-dimensional structure of a protein complex starting from the individual structures of the constituent proteins. In order to make the calculation tractable, most docking algorithms begin by assuming that the structures to be docked are rigid. This article describes some recent developments we have made to adapt our FFT-based "Hex" rigid-body docking algorithm to exploit the computational power of modern graphics processors (GPUs). The Hex algorithm is very efficient on conventional central processor units (CPUs), yet significant further speed-ups have been obtained by using GPUs. Thus, FFT-based docking calculations which formerly took many hours to complete using CPUs may now be carried out in a matter of seconds using GPUs. The Hex docking program and access to a server version of Hex on a GPU-based compute cluster are both available for public use.

  9. The diagnostic value of single-photon emission computed tomography/computed tomography for severe sacroiliac joint dysfunction.

    Science.gov (United States)

    Tofuku, Katsuhiro; Koga, Hiroaki; Komiya, Setsuro

    2015-04-01

    We aimed to evaluate the value of single-photon emission computed tomography (SPECT)/computed tomography (CT) for the diagnosis of sacroiliac joint (SIJ) dysfunction. SPECT/CT was performed in 32 patients with severe SIJ dysfunction who did not respond to 1 year of conservative treatment and had a score of >4 points on a 10-cm visual analog scale. We investigated the relationship between the presence of severe SIJ dysfunction and tracer accumulation, as confirmed by SPECT/CT. In cases of bilateral SIJ dysfunction, we also compared the intensity of tracer accumulation on each side. Moreover, we examined the relationship between the intensity of tracer accumulation and the different treatments the patients subsequently received. All 32 patients with severe SIJ dysfunction had tracer accumulation with a standardized uptake value (SUV) of >2.2 (mean SUV 4.7). In the 19 patients with lateralized symptom intensity, mean SUVs of the dominant side were significantly higher than those of the nondominant side. In the 10 patients with no lateralization, the difference in the SUVs between sides was ... Patients with higher levels of tracer accumulation had greater symptom severity and also required more advanced treatment. Thus, we believe that SPECT/CT may be a suitable supplementary diagnostic modality for SIJ dysfunction as well as a useful technique for predicting the prognosis of this condition.

  10. Processor-in-memory-and-storage architecture

    Science.gov (United States)

    DeBenedictis, Erik

    2018-01-02

    A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes a same instruction at a same time. A code analyzer receives a plurality of residues that represents a code word corresponding to the same instruction and an indication of whether the code word is a memory address code or a data code from the plurality of sub-cores. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and a determination of whether the plurality of residues are consistent or inconsistent.

  11. Experimental and Computational Characterization of Biological Liquid Crystals: A Review of Single-Molecule Bioassays

    Directory of Open Access Journals (Sweden)

    Sungsoo Na

    2009-09-01

    Quantitative understanding of the mechanical behavior of biological liquid crystals such as proteins is essential for gaining insight into their biological functions, since some proteins perform notable mechanical functions. Recently, single-molecule experiments have allowed not only the quantitative characterization of the mechanical behavior of proteins such as protein unfolding mechanics, but also the exploration of the free energy landscape for protein folding. In this work, we have reviewed the current state of the art in single-molecule bioassays that enable quantitative studies on protein unfolding mechanics and/or various molecular interactions. Specifically, single-molecule pulling experiments based on atomic force microscopy (AFM) have been overviewed. In addition, the computational simulations on single-molecule pulling experiments have been reviewed. We have also reviewed the AFM cantilever-based bioassay that provides insight into various molecular interactions. Our review highlights the AFM-based single-molecule bioassay for quantitative characterization of biological liquid crystals such as proteins.

  12. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
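
    For a definite (negation-free) program, the single step operator and the fixed-point iteration that the neural network in this record is trained to reproduce can be written directly; the sketch below uses an illustrative clause encoding of our own:

    ```python
    def single_step(program, interpretation):
        """One application of the single step operator T_p: a ground atom is
        true in the output iff some clause `head <- body` has every body atom
        true under the current interpretation (a set of true atoms)."""
        return {head for head, body in program
                if all(atom in interpretation for atom in body)}

    def least_fixed_point(program):
        """Iterate T_p from the empty interpretation until it stops changing;
        for definite programs T_p is monotone, so this terminates."""
        interp = set()
        while True:
            nxt = single_step(program, interp)
            if nxt == interp:
                return interp
            interp = nxt

    # Clauses encoded as (head, [body atoms]); facts have empty bodies.
    program = [("p", []), ("q", ["p"]), ("r", ["p", "q"]), ("s", ["t"])]
    print(least_fixed_point(program))   # {'p', 'q', 'r'}; 's' is never derived
    ```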

  13. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  14. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  15. Occult primary tumors of the head and neck: accuracy of thallium 201 single-photon emission computed tomography and computed tomography and/or magnetic resonance imaging

    NARCIS (Netherlands)

    van Veen, S. A.; Balm, A. J.; Valdés Olmos, R. A.; Hoefnagel, C. A.; Hilgers, F. J.; Tan, I. B.; Pameijer, F. A.

    2001-01-01

    To determine the accuracy of thallium 201 single-photon emission computed tomography (thallium SPECT) and computed tomography and/or magnetic resonance imaging (CT/MRI) in the detection of occult primary tumors of the head and neck. Study of diagnostic tests. National Cancer Institute, Amsterdam,

  16. On the Organization of Parallel Operation of Some Algorithms for Finding the Shortest Path on a Graph on a Computer System with Multiple Instruction Stream and Single Data Stream

    Directory of Open Access Journals (Sweden)

    V. E. Podol'skii

    2015-01-01

    Full Text Available The paper considers the implementation of the Bellman-Ford and Lee algorithms to find the shortest graph path on a computer system with multiple instruction streams and a single data stream (MISD). The MISD computer is a computer that executes commands of arithmetic-logical processing (on the CPU) and commands of structures processing (on the structures processor) in parallel on a single data stream. Transformation of sequential programs into MISD programs is a labor-intensive process because it requires the stream of arithmetic-logical processing to be manually separated from that of structures processing. Algorithms based on the processing of data structures (e.g., algorithms on graphs) show high performance on a MISD computer. The Bellman-Ford and Lee algorithms for finding the shortest path on a graph are representatives of these algorithms. They are applied in robotics for automatic planning of robot movement in situ. Modifications of the Bellman-Ford and Lee algorithms for finding the shortest graph path in coprocessor MISD mode, and the parallel MISD modifications of these algorithms, were first obtained in this article. Thus, this article continues a series of studies on the transformation of sequential algorithms into MISD ones (Dijkstra's and Ford-Fulkerson's algorithms) and has a pronounced applied nature. The article also presents the analysis results of the Bellman-Ford and Lee algorithms in MISD mode. The paper formulates the main directions of a technique for parallelizing algorithms into an arithmetic-logical processing stream and a structures processing stream. Among the key areas for future research, the development of a mathematical approach to provide a subsequently formalized and automated process of parallelizing sequential algorithms between the CPU and the structures processor is highlighted. Among the mathematical models that can be used in future studies are graph models of algorithms (e.g., the dependency graph of a program). Due to the high
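
    For reference, a plain sequential version of the Bellman-Ford algorithm discussed in this record is sketched below; the CPU/structures-processor split of the MISD implementation is hardware-specific and not reproduced here:

```python
# Sequential reference version of the Bellman-Ford shortest-path
# algorithm; the MISD split between arithmetic-logical and structures
# processing streams is not modeled in this sketch.

def bellman_ford(n, edges, source):
    """n vertices (0..n-1), edges as (u, v, weight) triples."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):               # at most n-1 relaxation sweeps
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                  # early exit once converged
            break
    for u, v, w in edges:                # detect negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
```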

  17. An Introduction to Parallel Computation

    Indian Academy of Sciences (India)

    found in the Suggested Reading given at the end. Basic Programming Model. A parallel computer can be programmed by providing a program for each processor in it. In most common parallel computer organizations, a processor can only access its local memory. The program provided to each processor may perform ...

  18. Capsule-like voids in SiC single crystal: Phase contrast imaging and computer simulations

    Directory of Open Access Journals (Sweden)

    V. G. Kohn

    2014-09-01

    Full Text Available The results of observation of capsule-like voids in a silicon carbide (6H-SiC) single crystal by means of a phase contrast imaging technique with synchrotron radiation at the Pohang Light Source, as well as computer simulations of such images, are presented. A pink beam and a monochromated beam were used. The latter gives more pronounced images, but they are still smoothed owing to the finite detector resolution and the spatial coherence of the beam. The sizes and structure of the far-field images differ from those of the objects. The computer simulations allow us to reproduce the shape and size of the capsule-like void.

  19. Single-photon emission computed tomography in human immunodeficiency virus encephalopathy: A preliminary report

    International Nuclear Information System (INIS)

    Masdeu, J.C.; Yudd, A.; Van Heertum, R.L.; Grundman, M.; Hriso, E.; O'Connell, R.A.; Luck, D.; Camli, U.; King, L.N.

    1991-01-01

    Depression or psychosis in a previously asymptomatic individual infected with the human immunodeficiency virus (HIV) may be psychogenic, related to brain involvement by the HIV, or both. Although prognosis and treatment differ depending on etiology, computed tomography (CT) and magnetic resonance imaging (MRI) are usually unrevealing in early HIV encephalopathy and therefore cannot differentiate it from psychogenic conditions. Thirty of 32 patients (94%) with HIV encephalopathy had single-photon emission computed tomography (SPECT) findings that differed from the findings in 15 patients with non-HIV psychoses and 6 controls. SPECT showed multifocal cortical and subcortical areas of hypoperfusion. In 4 cases, cognitive improvement after 6-8 weeks of zidovudine (AZT) therapy was reflected in amelioration of SPECT findings. CT remained unchanged. SPECT may be a useful technique for the evaluation of HIV encephalopathy

  20. The contribution of single photon emission computed tomography in the clinical assessment of Alzheimer type dementia

    International Nuclear Information System (INIS)

    Boudousq, V.; Collombier, L.; Kotzki, P.O.

    1999-01-01

    The value of brain single-photon emission computed tomography in supporting the clinical diagnosis of Alzheimer-type dementia is now established. Numerous studies have reported decreased perfusion in the association cortex of the parietal lobe and the posterior temporal regions. In patients with mild cognitive complaints, the presence of focal hypoperfusion may substantially increase the probability of the disease. In addition, emission tomography emerges as a helpful tool in situations in which there is diagnostic doubt. In such cases, the presence of a temporo-parietal perfusion deficit associated with hippocampal atrophy on MRI or X-ray computed tomography contributes to diagnostic accuracy. Moreover, some studies suggest that emission tomography may be useful for preclinical prediction of Alzheimer's disease and for predicting cognitive decline. (author)

  1. The artificial compression method for computation of shocks and contact discontinuities. I - Single conservation laws

    Science.gov (United States)

    Harten, A.

    1977-01-01

    The paper discusses the use of the artificial compression method for the computation of discontinuous solutions of a single conservation law by finite difference methods. The single conservation law has either a shock or a contact discontinuity. Any monotone finite difference scheme applied to the original equation smears the discontinuity, while the same scheme applied to the equation modified by an artificial compression flux produces steady progressing profiles. If L is any finite difference scheme in conservation form and C is an artificial compressor, the split flux artificial compression method CL is a corrective scheme: L smears the discontinuity while propagating it; C compresses the smeared transition toward a sharp discontinuity. Numerical implementation of artificial compression is described.
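
    The smearing behavior that artificial compression corrects is easy to reproduce. The sketch below applies a first-order monotone (upwind) scheme in conservation form to a step profile for linear advection; the compression step C itself is omitted, so this shows only the baseline behavior of the scheme L that the paper's method CL corrects:

```python
import numpy as np

# Minimal monotone scheme for a single conservation law u_t + f(u)_x = 0,
# here linear advection f(u) = a*u with first-order upwind differencing.
# This reproduces only the smearing that Harten's artificial compression
# step C is designed to counteract; C itself is omitted from this sketch.

a = 1.0                      # advection speed
nx, nt = 200, 100
dx = 1.0 / nx
dt = 0.5 * dx / a            # CFL number 0.5

x = np.linspace(0.0, 1.0, nx)
u = np.where(x < 0.3, 1.0, 0.0)   # initial step (contact discontinuity)

for _ in range(nt):
    flux = a * u                                      # f(u) at each cell
    u[1:] = u[1:] - dt / dx * (flux[1:] - flux[:-1])  # upwind update

# The exact solution is the same step shifted by a*nt*dt = 0.25; the
# computed profile is smeared over many cells instead of staying sharp.
width = np.sum((u > 0.05) & (u < 0.95))
print(f"transition smeared over {width} cells")
```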

  2. Single- versus dual-energy quantitative computed tomography for spinal densitometry in patients with rheumatoid arthritis

    International Nuclear Information System (INIS)

    Laan, R.F.J.M.; Erning, L.J.Th.O. van; Lemmens, J.A.M.; Putte, L.B.A. van de; Ruijs, S.H.J.; Riel, P.L.C.M. van

    1992-01-01

    Lumbar bone mineral density was measured by both single- and dual-energy quantitative computed tomography in 109 patients with rheumatoid arthritis. The results were corrected for the age-related increase in vertebral fat content by converting them to percentages of expected densities, using sex- and energy-level-specific regression equations obtained in a normal reference population. The percentages of expected density are approximately 10% lower in the single- than in the dual-energy mode, in patients both with and without prednisone therapy. This difference is statistically highly significant and is positively correlated with the duration of the disease and with the degree of radiological joint destruction. The data suggest that the vertebral fat content may be increased in patients with rheumatoid arthritis, as a consequence of disease-dependent mechanisms. (Author)

  3. Binary Factorization in Hopfield-Like Neural Networks: Single-Step Approximation and Computer Simulations

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Sirota, A.M.; Húsek, Dušan; Muraviev, I. P.

    2004-01-01

    Roč. 14, č. 2 (2004), s. 139-152 ISSN 1210-0552 R&D Projects: GA ČR GA201/01/1192 Grant - others:BARRANDE(EU) 99010-2/99053; Intellectual computer Systems(EU) Grant 2.45 Institutional research plan: CEZ:AV0Z1030915 Keywords : nonlinear binary factor analysis * feature extraction * recurrent neural network * Single-Step approximation * neurodynamics simulation * attraction basins * Hebbian learning * unsupervised learning * neuroscience * brain function modeling Subject RIV: BA - General Mathematics

  4. A Computer Program for Flow-Log Analysis of Single Holes (FLASH)

    Science.gov (United States)

    Day-Lewis, F. D.; Johnson, C.D.; Paillet, Frederick L.; Halford, K.J.

    2011-01-01

    A new computer program, FLASH (Flow-Log Analysis of Single Holes), is presented for the analysis of borehole vertical flow logs. The code is based on an analytical solution for steady-state multilayer radial flow to a borehole. The code includes options for (1) discrete fractures and (2) multilayer aquifers. Given vertical flow profiles collected under both ambient and stressed (pumping or injection) conditions, the user can estimate fracture (or layer) transmissivities and far-field hydraulic heads. FLASH is coded in Microsoft Excel with Visual Basic for Applications routines. The code supports manual and automated model calibration. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
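
    The underlying steady-state inversion can be sketched compactly. The following illustrative code (not FLASH itself, which is an Excel/VBA tool) solves a Thiem-type relation per layer from inflows measured under ambient and stressed borehole heads; the formula's use here and all numbers are assumptions for illustration:

```python
import math

# Sketch of a steady-state radial-flow inversion in the spirit of FLASH.
# For each layer i, a Thiem-type relation Q_i = C * T_i * (h_i - h_bh),
# with C = 2*pi / ln(R/rw), links inflow Q_i, transmissivity T_i,
# far-field head h_i, and borehole water level h_bh.  Measuring Q_i
# under ambient and stressed borehole levels gives two equations in
# the two unknowns (T_i, h_i).

def invert_layer(q_amb, q_str, h_amb, h_str, R=100.0, rw=0.1):
    """q_amb, q_str: layer inflows (m^3/d) at borehole heads h_amb, h_str (m)."""
    # q_amb / q_str = (h_i - h_amb) / (h_i - h_str)  ->  solve for h_i
    h_i = (q_amb * h_str - q_str * h_amb) / (q_amb - q_str)
    C = 2.0 * math.pi / math.log(R / rw)
    T_i = q_amb / (C * (h_i - h_amb))
    return T_i, h_i

# Hypothetical flow-log readings for one fracture zone:
T, h = invert_layer(q_amb=1.0, q_str=6.0, h_amb=10.0, h_str=8.0)
print(f"T = {T:.3f} m^2/d, far-field head = {h:.2f} m")
```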

  5. Bronchobiliary Fistula Localized by Cholescintigraphy with Single-Photon Emission Computed Tomography

    International Nuclear Information System (INIS)

    Artunduaga, Maddy; Patel, Niraj R.; Wendt, Julie A.; Guy, Elizabeth S.; Nachiappan, Arun C.

    2015-01-01

    Biliptysis is an important clinical feature to recognize as it is associated with bronchobiliary fistula, a rare entity. Bronchobiliary fistulas have been diagnosed with planar cholescintigraphy. However, cholescintigraphy with single-photon emission computed tomography (SPECT) can better spatially localize a bronchobiliary fistula as compared to planar cholescintigraphy alone, and is useful for preoperative planning if surgical treatment is required. Here, we present the case of a 23-year-old male who developed a bronchobiliary fistula in the setting of posttraumatic and postsurgical infection, which was diagnosed and localized by cholescintigraphy with SPECT

  6. Painful spondylolysis or spondylolisthesis studied by radiography and single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Collier, B.D.; Johnson, R.P.; Carrera, G.F.

    1985-01-01

    Planar bone scintigraphy (PBS) and single-photon emission computed tomography (SPECT) were compared in 19 adults with radiographic evidence of spondylolysis and/or spondylolisthesis. SPECT was more sensitive than PBS when used to identify symptomatic patients and sites of painful defects in the pars interarticularis. In addition, SPECT allowed more accurate localization than PBS. In 6 patients, spondylolysis or spondylolisthesis was unrelated to low back pain, and SPECT images of the posterior neural arch were normal. The authors conclude that when spondylolysis or spondylolisthesis is the cause of low back pain, pars defects are frequently heralded by increased scintigraphic activity which is best detected and localized by SPECT

  7. Painful spondylolysis or spondylolisthesis studied by radiography and single-photon emission computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Collier, B.D.; Johnson, R.P.; Carrera, G.F.; Meyer, G.A.; Schwab, J.P.; Flatley, T.J.; Isitman, A.T.; Hellman, R.S.; Zielonka, J.S.; Knobel, J.

    1985-01-01

    Planar bone scintigraphy (PBS) and single-photon emission computed tomography (SPECT) were compared in 19 adults with radiographic evidence of spondylolysis and/or spondylolisthesis. SPECT was more sensitive than PBS when used to identify symptomatic patients and sites of painful defects in the pars interarticularis. In addition, SPECT allowed more accurate localization than PBS. In 6 patients, spondylolysis or spondylolisthesis was unrelated to low back pain, and SPECT images of the posterior neural arch were normal. The authors conclude that when spondylolysis or spondylolisthesis is the cause of low back pain, pars defects are frequently heralded by increased scintigraphic activity which is best detected and localized by SPECT.

  8. Structural Optimization in a Distributed Computing Environment

    National Research Council Canada - National Science Library

    Voon, B. K; Austin, M. A

    1991-01-01

    ...) optimization algorithm customized to a Distributed Numerical Computing environment (DNC). DNC utilizes networking technology and an ensemble of loosely coupled processors to compute structural analyses concurrently...

  9. Utility of single photon emission computed tomography/computed tomography imaging in evaluation of chronic low back pain

    International Nuclear Information System (INIS)

    Harisankar, Chidambaram Natrajan Balasubramanian; Mittal, Bhagwant Rai; Bhattacharya, Anish; Singh, Paramjeet; Sen, Ramesh

    2012-01-01

    Abnormal morphologic findings on imaging were thought to explain the etiology of low back pain (LBP). However, it is now known that a variety of morphologic abnormalities are noted even in asymptomatic individuals. Single photon emission computed tomography/computed tomography (SPECT/CT) could be used to differentiate incidental findings from clinically significant findings. This study was performed to define the SPECT/CT patterns in patients with LBP and to correlate them with clinical and magnetic resonance imaging (MRI) findings. Thirty adult patients with LBP of duration 3 months or more were prospectively evaluated in this study. Patients with known or suspected malignancy, trauma or infectious processes were excluded. A detailed history of sensory and motor symptoms was taken and a neurologic examination was performed. All the patients underwent MRI and bone scintigraphy with hybrid SPECT/CT of the lumbo-sacral spine within 1 month of each other. The patients were classified into those with and without neurologic symptoms and activity limitation. The findings of clinical examination and imaging were compared. MRI and SPECT/CT findings were also compared. Thirty patients (18 men and 12 women; mean age 38 years; range 17-64 years) were eligible for the study. Clinically, 14 of 30 (46%) had neurologic signs and/or symptoms. Six of the 30 patients (20%) had a positive straight leg raising test (SLRT). Twenty-two of the 30 patients (73%) had a SPECT abnormality. The most frequent SPECT/CT abnormality was tracer uptake in the anterior part of the vertebral body with osteophytes/sclerotic changes. Significant positive agreement was noted between this finding and MRI evidence of degenerative disc disease. Only 13% of patients had more than one abnormality on SPECT. All 30 patients had MRI abnormalities. The most frequent abnormality was degenerative disc disease and facet joint arthropathy. MRI showed a single intervertebral disc abnormality in 36% of the patients and more than one

  10. Dense and Sparse Matrix Operations on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Husbands,Parry; Yelick, Katherine

    2005-05-01

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.
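
    The flavor of such an analytic framework can be conveyed by a roofline-style estimate: predicted time is the larger of compute time and memory time. This is only a sketch of the general idea; the peak figures below are illustrative, not measured Cell parameters:

```python
# Minimal roofline-style performance estimate: predicted time is the
# max of compute time and memory-transfer time.  Peak numbers below
# are illustrative placeholders, not measured Cell figures.

def predict_gflops(flops, bytes_moved, peak_gflops, peak_gb_per_s):
    t_compute = flops / (peak_gflops * 1e9)
    t_memory = bytes_moved / (peak_gb_per_s * 1e9)
    t = max(t_compute, t_memory)            # whichever resource saturates
    return flops / t / 1e9

# SpMV-like kernel: ~2 flops per nonzero, ~12 bytes per nonzero moved,
# so the prediction is memory-bound, as expected for sparse matrix ops.
nnz = 10_000_000
print("%.1f GFLOP/s" % predict_gflops(2 * nnz, 12 * nnz,
                                      peak_gflops=200.0, peak_gb_per_s=25.0))
```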

  11. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  12. Optical symbolic processor for expert system execution

    Science.gov (United States)

    Guha, Aloke

    1987-11-01

    The goal of this program is to develop a concept for an optical computer architecture for symbolic computing by defining a computation model of a high level language, examining possible devices for the ultimate construction of a processor, and defining the required optical operations. This quarter we investigated implementation alternatives for an optical shuffle exchange network (SEN). Work in the previous quarter had led to the conclusion that the SEN was the most appropriate optical interconnection network topology for the symbolic processing architecture (SPARO). A more detailed analysis was therefore conducted to examine implementation possibilities. It was determined that while the shuffle connection of the SEN is very feasible in optics using passive devices, a full-scale exchange switch that handles conflict resolution among competing messages is much more difficult. More emphasis was therefore given to the exchange switch design. The functionalities required for the exchange switch and its controls were analyzed and then assessed for optical implementation. It is clear that even the basic exchange switch, that is, an exchange without the controls for conflict resolution, delivery, and so on, is quite a difficult problem in optics. We have proposed a number of optical techniques that appear to be good candidates for realizing the basic exchange switch. A reasonable next step is to evaluate these techniques.

  13. The P4 Parallel Programming System, the Linda Environment, and Some Experiences with Parallel Computation

    Directory of Open Access Journals (Sweden)

    Allan R. Larrabee

    1993-01-01

    Full Text Available The first digital computers consisted of a single processor acting on a single stream of data. In this so-called "von Neumann" architecture, computation speed is limited mainly by the time required to transfer data between the processor and memory. This limiting factor has been referred to as the "von Neumann bottleneck". The concern that the miniaturization of silicon-based integrated circuits will soon reach theoretical limits of size and gate times has led to increased interest in parallel architectures and also spurred research into alternatives to silicon-based implementations of processors. Meanwhile, sequential processors continue to be produced with increased clock rates, more memory locally available to a processor, and higher rates at which data can be transferred to and from memories, networks, and remote storage. The efficiency of compilers and operating systems is also improving over time. Although such characteristics ultimately limit maximum performance, a large improvement in the speed of scientific computations can often be achieved by utilizing more efficient algorithms, particularly those that support parallel computation. This work discusses experiences with two tools for large grain (or "macro task") parallelism.

  14. Hemodynamics of hepatocellular carcinoma with single-level dynamic computed tomography during hepatic arteriography

    International Nuclear Information System (INIS)

    Tanihata, Hirohiko

    2002-01-01

    The purpose of this study is to verify the hemodynamics of hepatocellular carcinoma (HCC) and to explore the draining pathway using single-level dynamic computed tomography during hepatic arteriography (single-level dynamic CTHA). One hundred and one patients with 131 nodules of HCC underwent single-level dynamic CTHA. Forty-seven nodules were diagnosed from histological specimens and the other eighty-four nodules from clinical findings of elevated AFP and/or PIVKA II and a hypervascular tumor on angiography. Single-level dynamic CTHA was performed with a catheter inserted into the proper hepatic artery or a more peripheral hepatic artery, with a slice thickness of 3 mm at the same level. Images were taken continuously, one per second for 40 seconds, during injection of contrast medium at 2 ml/sec for 10 seconds. The images were divided into three phases: an early phase, 1 to 10 seconds; a middle phase, 11 to 20 seconds; and a late phase, 21 to 40 seconds. After analysis of the vascular pattern in each phase, the hemodynamics of HCC was classified into three patterns: a hypovascular pattern in 24 nodules with an average size of 13.4±4.2 mm, an intermediate pattern in 21 nodules with an average size of 20.8±7.8 mm, and a hypervascular pattern in 86 nodules with an average size of 31.6±16.3 mm. There was a significant correlation between tumor size and vascular pattern. In the hypovascular and intermediate pattern groups, the draining pathways were sinusoids. Of the 86 nodules with a hypervascular pattern, blood flow drained into the portal vein (including a bright branch structure) in 20 nodules, into the portal vein and hepatic vein in 2 nodules, into the portal vein and an extrahepatic vein in 1 nodule, into the hepatic vein in 11 nodules, into extrahepatic veins in 4 nodules, and into sinusoids in 48 nodules. In conclusion, from the viewpoint of hemodynamics using single-level dynamic CTHA, I proposed the new

  15. DE-BLURRING SINGLE PHOTON EMISSION COMPUTED TOMOGRAPHY IMAGES USING WAVELET DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Neethu M. Sasi

    2016-02-01

    Full Text Available Single photon emission computed tomography is a popular nuclear medicine imaging technique which generates images by detecting radiation emitted by radioactive isotopes injected into the human body. Scattering of the emitted radiation introduces blur into this type of image. This paper proposes an image processing technique to enhance cardiac single photon emission computed tomography images by reducing the blur in the image. The algorithm works in two main stages. In the first stage, a maximum likelihood estimate of the point spread function and the true image is obtained. In the second stage, the Lucy-Richardson algorithm is applied to selected wavelet coefficients of the true image estimate. The significant contribution of this paper is that the processing of images is done in the wavelet domain. Pre-filtering is also done as a sub-stage to avoid unwanted ringing effects. Real cardiac images are used for the quantitative and qualitative evaluation of the algorithm. Blur metric, peak signal to noise ratio and the Tenengrad criterion are used as quantitative measures. Comparison against other existing de-blurring algorithms is also performed. The simulation results indicate that the proposed method effectively reduces the blur present in the image.
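
    The core of the second stage is the Richardson-Lucy update. A minimal sketch follows, without the wavelet-coefficient selection and the pre-filtering sub-stage described in the record; the synthetic Gaussian PSF and demo image are invented for illustration:

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal Richardson-Lucy deconvolution: the multiplicative update
# x <- x * ( H^T ( y / (H x) ) ), where H is convolution with the PSF.

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    psf_mirror = psf[::-1, ::-1]               # adjoint of convolution
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic demo: blur a point source with a Gaussian PSF, then restore.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / 8.0)
psf /= psf.sum()
truth = np.zeros((64, 64)); truth[32, 32] = 1.0
blurred = np.clip(fftconvolve(truth, psf, mode="same"), 0.0, None)
restored = richardson_lucy(blurred, psf)
print("peak: blurred %.3f -> restored %.3f" % (blurred.max(), restored.max()))
```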

  16. Online track processor for the CDF upgrade

    International Nuclear Information System (INIS)

    Ciobanu, C.; Gertenslager, J.; Hoftiezer, J.

    1999-01-01

    A trigger track processor is being designed for the CDF upgrade. This processor identifies high momentum (P T > 1.5 GeV/c) charged tracks in the new central outer tracking chamber for CDF II. The track processor is called the Extremely Fast Tracker (XFT). The XFT design is highly parallel to handle the input rate of 183 Gbits/sec and output rate of 44 Gbits/sec. The processor is pipelined and reports the results for a new event every 132 ns. The processor uses three stages, hit classification, segment finding, and segment linking. The pattern recognition algorithms for the three stages are implemented in programmable logic devices (PLDs) which allow for in-situ modification of the algorithm at any time. The PLDs reside on three different types of modules. Prototypes of each of these modules have been designed and built, and are presently undergoing testing. An overview of the track processor and results of testing are presented

  17. Configurable Multi-Purpose Processor

    Science.gov (United States)

    Valencia, J. Emilio; Forney, Chirstopher; Morrison, Robert; Birr, Richard

    2010-01-01

    Advancements in technology have allowed the miniaturization of systems used in aerospace vehicles. This technology is driven by the need for next-generation systems that provide reliable, responsive, and cost-effective range operations while providing increased capabilities such as simultaneous mission support, increased launch trajectories, improved launch and landing opportunities, etc. Leveraging the newest technologies, the command and telemetry processor (CTP) concept provides a compact, flexible, and integrated solution for flight command and telemetry systems and range systems. The CTP is a relatively small circuit board that serves as a processing platform for high-dynamic, high-vibration environments. The CTP can be reconfigured and reprogrammed, allowing it to be adapted for many different applications. The design is centered around a configurable field-programmable gate array (FPGA) device that contains numerous logic cells that can be used to implement traditional integrated circuits. The FPGA contains two PowerPC processors running the VxWorks real-time operating system, which are used to execute software programs specific to each application. The CTP was designed and developed specifically to provide telemetry functions; namely, the command processing, telemetry processing, and GPS metric tracking of a flight vehicle. However, it can be used as a general-purpose processor board to perform numerous functions implemented in either hardware or software using the FPGA's processors and/or logic cells. Functionally, the CTP was designed for range safety applications where it would ultimately become part of a vehicle's flight termination system. Consequently, the major functions of the CTP are to perform the forward link command processing, GPS metric tracking, return link telemetry data processing, error detection and correction, data encryption/decryption, and initiation of flight termination action commands. Also, the CTP had to be designed to survive and

  18. A Taxonomy of Reconfigurable Single-/Multiprocessor Systems-on-Chip

    Directory of Open Access Journals (Sweden)

    Diana Göhringer

    2009-01-01

    Full Text Available Runtime adaptivity of hardware in processor architectures is a novel trend, which is under investigation in a variety of research labs all over the world. The runtime exchange of modules, implemented on reconfigurable hardware, affects the instruction flow (e.g., in reconfigurable instruction set processors) or the data flow, which has a strong impact on the performance of an application. Furthermore, the choice of a certain processor architecture related to the class of target applications is a crucial point in application development. A simple example is the domain of high-performance computing applications found in meteorology or high-energy physics, where vector processors are the optimal choice. A classification scheme for computer systems was provided in 1966 by Flynn, in which single/multiple data and instruction streams were combined into four types of architectures. This classification is now used as a foundation for an extended classification scheme including runtime adaptivity as a further degree of freedom for processor architecture design. The developed scheme is validated by a multiprocessor system implemented on reconfigurable hardware as well as by a classification of existing static and reconfigurable processor systems.

  19. Using of opportunities of graphic processors for acceleration of scientific and technical calculations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Sereda, T.M.; Us, S.A.; Shestakov, M.V.

    2009-01-01

    New opportunities offered by modern graphics processors (GPUs) for accelerating scientific and technical calculations, by dividing a computing task between the central processor and the GPU, are described. The use of NVIDIA CUDA technology to harness the parallel computing capabilities of the GPU for some computationally intensive mathematical tasks is described. Examples comparing performance when these tasks are computed without the GPU and with NVIDIA CUDA on a GeForce 8800 graphics processor are presented

  20. Digital Signal Processor For GPS Receivers

    Science.gov (United States)

    Thomas, J. B.; Meehan, T. K.; Srinivasan, J. M.

    1989-01-01

    Three innovative components are combined to produce an all-digital signal processor with superior characteristics: outstanding accuracy, high-dynamics tracking, versatile integration times, lower loss-of-lock signal strengths, and infrequent cycle slips. The three components are a digital chip advancer, a digital carrier downconverter and code correlator, and a digital tracking processor. The all-digital signal processor is intended for use in receivers of the Global Positioning System (GPS) for geodesy, geodynamics, high-dynamics tracking, and ionospheric calibration.

  1. DEMONIC programming: a computational language for single-particle equilibrium thermodynamics, and its formal semantics.

    Directory of Open Access Journals (Sweden)

    Samson Abramsky

    2015-11-01

    Full Text Available Maxwell's Demon, 'a being whose faculties are so sharpened that he can follow every molecule in its course', has been the centre of much debate about its ability to violate the second law of thermodynamics. Landauer's hypothesis, that the Demon must erase its memory and incur a thermodynamic cost, has become the standard response to Maxwell's dilemma, and its implications for the thermodynamics of computation reach into many areas of quantum and classical computing. It remains, however, still a hypothesis. Debate has often centred around simple toy models of a single particle in a box. Despite their simplicity, the ability of these systems to accurately represent thermodynamics (specifically, to satisfy the second law), and whether or not they display Landauer Erasure, has been a matter of ongoing argument. The recent Norton-Ladyman controversy is one such example. In this paper we introduce a programming language to describe these simple thermodynamic processes, and give a formal operational semantics and program logic as a basis for formal reasoning about thermodynamic systems. We formalise the basic single-particle operations as statements in the language, and then show that the second law must be satisfied by any composition of these basic operations. This is done by finding a computational invariant of the system. We show, furthermore, that this invariant requires an erasure cost to exist within the system, equal to kT ln 2 for a bit of information: Landauer Erasure becomes a theorem of the formal system. The Norton-Ladyman controversy can therefore be resolved in a rigorous fashion, and moreover the formalism we introduce gives a set of reasoning tools for further analysis of Landauer Erasure, which are provably consistent with the second law of thermodynamics.

  2. Processor farming method for multi-scale analysis of masonry structures

    Science.gov (United States)

    Krejčí, Tomáš; Koudelka, Tomáš

    2017-07-01

    This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of the stone blocks is much larger than the thickness of the mortar layers and a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level (the structure), and the slave processors deal with a homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.
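
    The processor-farming pattern itself is simple to sketch. Below, a master process farms independent meso-scale evaluations out to a pool of slave processes; the RVE evaluation is a stand-in function invented for illustration, not the paper's coupled heat-and-moisture model:

```python
from multiprocessing import Pool

# Sketch of the processor-farming pattern: a master holds the
# macro-scale problem and farms independent meso-scale (representative
# volume element) evaluations out to slave processes.

def evaluate_rve(macro_state):
    """Stand-in for a meso-scale homogenization at one integration point."""
    temperature, moisture = macro_state
    effective_conductivity = 1.0 + 0.1 * temperature + 0.05 * moisture
    return effective_conductivity

if __name__ == "__main__":
    # Macro-scale level: one state per integration point of the structure.
    macro_states = [(20.0 + i, 0.5) for i in range(8)]
    with Pool(processes=4) as farm:          # the "slave" processors
        effective = farm.map(evaluate_rve, macro_states)
    print(effective)  # fed back into the macro-scale (master) solve
```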

  3. Brain receptor single-photon emission computer tomography with 123I Datscan in Parkinson's disease

    International Nuclear Information System (INIS)

    Minchev, D.; Peshev, N.; Kostadinova, I.; Grigorova, O.; Trindev, P.; Shotekov, P.

    2005-01-01

    Clinical features alone are not sufficient for the early diagnosis of Parkinson's disease. Positron emission tomography and receptor single-photon emission tomography can be used to image the functional integrity of nigrostriatal dopaminergic structures. Twenty-four patients (17 men and 7 women) were investigated: 20 with Parkinson's disease and 4 with essential tremor. The radiopharmaceutical, 123I-Datscan (ioflupane labelled with 123I), is a cocaine analogue with selective affinity for the dopamine transporters located in the dopaminergic nigrostriatal terminals in the striatum. Single-photon emission computed tomography was performed with a SPECT gamma camera (ADAC, SH Epic detector). The scintigraphic study was made 3 to 6 hours after intravenous injection of 185 MBq of the radiopharmaceutical. 120 frames were registered, each lasting 22 seconds, with a gamma camera rotation of 360°. After generation of transversal slices, two composite pictures were produced: the first images the striatum, the second the occipital region. Two ratios were calculated, representing the uptake of the radiopharmaceutical in the left and right striatum. Qualitative and quantitative criteria were elaborated for evaluating the scintigraphic patterns. Decreased, nonhomogeneous and asymmetric uptake of the radiopharmaceutical, coupled with low quantitative parameters in the range from 1.44 to 2.87, represents the characteristic scintigraphic pattern for Parkinson's disease with a clear clinical picture. Homogeneous, high-intensity and symmetric uptake of the radiopharmaceutical in the striatum, with a clear border and quantitative parameters up to 4.40, represents the scintigraphic pattern in the two patients with essential tremor. Receptor single-photon emission computed tomography with 123I-Datscan is an accurate nuclear-medicine method for the precise diagnosis of Parkinson's disease and for its differentiation from

  4. Data register and processor for multiwire chambers

    International Nuclear Information System (INIS)

    Karpukhin, V.V.

    1985-01-01

    A data register and a processor for receiving and processing data from the drift chambers of a facility for investigating relativistic positronium are described. The data arrive at the register input as an 8-bit Gray code, are stored, and are transformed into a position code. The register information is delivered to the CAMAC dataway and to the front-panel connector. The processor selects particle tracks in the horizontal plane of the facility. The maximum coordinate divergence ΔY and the minimum number of points on a track are set from the processor front panel. The processor solution time is 16 μs; the maximum number of simultaneously analyzed coordinates is 16

  5. Allocating application to group of consecutive processors in fault-tolerant deadlock-free routing path defined by routers obeying same rules for path selection

    Science.gov (United States)

    Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL

    2009-07-21

    In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
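
    A minimal sketch of the allocation step follows. Here the linear ordering is simply 0..n-1 arranged in a loop (deriving the ordering from the deadlock-free routing rules is the hardware-specific part), and a job is packed into the first run of consecutive free processors:

```python
# Sketch of allocating a job to consecutive processors in a loop-shaped
# linear ordering, first-fit style.  The trivial 0..n-1 ordering below
# is a placeholder for an ordering derived from the routing rules.

def allocate(free, n, size):
    """First fit of `size` consecutive processors on a loop of n slots.
    `free` is a set of free processor indices; returns allocated ids."""
    for start in range(n):
        run = [(start + k) % n for k in range(size)]
        if all(p in free for p in run):
            free.difference_update(run)
            return run
    return None  # no run of consecutive free processors exists

free = set(range(12))
print(allocate(free, 12, 5))   # [0, 1, 2, 3, 4]
print(allocate(free, 12, 4))   # [5, 6, 7, 8]
print(allocate(free, 12, 4))   # None (only 3 free slots remain)
```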

  6. Free vibration analysis of single-walled boron nitride nanotubes based on a computational mechanics framework

    Science.gov (United States)

    Yan, J. W.; Tong, L. H.; Xiang, Ping

    2017-12-01

    Free vibration behaviors of single-walled boron nitride nanotubes are investigated using a computational mechanics approach. The Tersoff-Brenner potential is used to describe the atomic interaction between boron and nitrogen atoms. The higher-order Cauchy-Born rule is employed to establish the constitutive relationship for single-walled boron nitride nanotubes on the basis of higher-order gradient continuum theory. It bridges the gap between nanoscale lattice structures and a continuum body. A mesh-free modeling framework is constructed, using the moving Kriging interpolation, which automatically satisfies the higher-order continuity, to implement the numerical simulation in order to match the higher-order constitutive model. In comparison with conventional atomistic simulation methods, the established atomistic-continuum multi-scale approach possesses advantages in tackling atomic structures with high accuracy and high efficiency. Free vibration characteristics of single-walled boron nitride nanotubes with different boundary conditions, tube chiralities, lengths and radii are examined in case studies. It is pointed out that a critical radius exists for the evaluation of the fundamental vibration frequencies of boron nitride nanotubes; opposite trends can be observed below and beyond the critical radius. Simulation results are presented and discussed.

  7. Endoleak detection using single-acquisition split-bolus dual-energy computer tomography (DECT)

    Energy Technology Data Exchange (ETDEWEB)

    Javor, D.; Wressnegger, A.; Unterhumer, S.; Kollndorfer, K.; Nolz, R.; Beitzke, D.; Loewe, C. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria)

    2017-04-15

    To assess a single-phase, dual-energy computed tomography (DECT) protocol with a split-bolus technique and reconstruction of virtual non-enhanced images for the detection of endoleaks after endovascular aneurysm repair (EVAR). Fifty patients referred for routine follow-up post-EVAR CT, with a history of at least one post-EVAR follow-up CT examination using our standard biphasic (arterial and venous phase) routine protocol (which was used as the reference standard), were included in this prospective trial. An intra-patient comparison of the split-bolus protocol and the previously used double-phase protocol was performed with regard to differences in diagnostic accuracy, radiation dose, and image quality. The analysis showed a significant radiation dose reduction of up to 42%, using the single-acquisition split-bolus protocol, while maintaining comparable diagnostic accuracy (primary endoleak detection rate of 96%). Image quality between the two protocols was comparable and only slightly inferior for the split-bolus scan (2.5 vs. 2.4). Using the single-acquisition, split-bolus approach allows for a significant dose reduction while maintaining high image quality, resulting in effective endoleak identification. (orig.)

  8. High-Resolution Computed Tomography of Single Breast Cancer Microcalcifications in Vivo

    Directory of Open Access Journals (Sweden)

    Kazumasa Inoue

    2011-07-01

    Full Text Available Microcalcification is a hallmark of breast cancer and a key diagnostic feature for mammography. We recently described the first robust animal model of breast cancer microcalcification. In this study, we hypothesized that high-resolution computed tomography (CT) could potentially detect the genesis of a single microcalcification in vivo and quantify its growth over time. Using a commercial CT scanner, we systematically optimized acquisition and reconstruction parameters. Two ray-tracing image reconstruction algorithms were tested: a voxel-driven “fast” cone beam algorithm (FCBA) and a detector-driven “exact” cone beam algorithm (ECBA). By optimizing acquisition and reconstruction parameters, we were able to achieve a resolution of 104 μm full width at half-maximum (FWHM). At an optimal detector sampling frequency, the ECBA provided a 28 μm (21%) FWHM improvement in resolution over the FCBA. In vitro, we were able to image a single 300 μm × 100 μm hydroxyapatite crystal. In a syngeneic rat model of breast cancer, we were able to detect the genesis of a single microcalcification in vivo and follow its growth longitudinally over weeks. Taken together, this study provides an in vivo “gold standard” for the development of calcification-specific contrast agents and a model system for studying the mechanism of breast cancer microcalcification.

  9. High-resolution computed tomography of single breast cancer microcalcifications in vivo.

    Science.gov (United States)

    Inoue, Kazumasa; Liu, Fangbing; Hoppin, Jack; Lunsford, Elaine P; Lackas, Christian; Hesterman, Jacob; Lenkinski, Robert E; Fujii, Hirofumi; Frangioni, John V

    2011-08-01

    Microcalcification is a hallmark of breast cancer and a key diagnostic feature for mammography. We recently described the first robust animal model of breast cancer microcalcification. In this study, we hypothesized that high-resolution computed tomography (CT) could potentially detect the genesis of a single microcalcification in vivo and quantify its growth over time. Using a commercial CT scanner, we systematically optimized acquisition and reconstruction parameters. Two ray-tracing image reconstruction algorithms were tested: a voxel-driven "fast" cone beam algorithm (FCBA) and a detector-driven "exact" cone beam algorithm (ECBA). By optimizing acquisition and reconstruction parameters, we were able to achieve a resolution of 104 μm full width at half-maximum (FWHM). At an optimal detector sampling frequency, the ECBA provided a 28 μm (21%) FWHM improvement in resolution over the FCBA. In vitro, we were able to image a single 300 μm × 100 μm hydroxyapatite crystal. In a syngeneic rat model of breast cancer, we were able to detect the genesis of a single microcalcification in vivo and follow its growth longitudinally over weeks. Taken together, this study provides an in vivo "gold standard" for the development of calcification-specific contrast agents and a model system for studying the mechanism of breast cancer microcalcification.

  10. Compilation Techniques Specific for a Hardware Cryptography-Embedded Multimedia Mobile Processor

    Directory of Open Access Journals (Sweden)

    Masa-aki FUKASE

    2007-12-01

    Full Text Available The development of single chip VLSI processors is the key technology of ever growing pervasive computing to answer overall demands for usability, mobility, speed, security, etc. We have so far developed a hardware cryptography-embedded multimedia mobile processor architecture, HCgorilla. Since HCgorilla integrates a wide range of techniques from architectures to applications and languages, a one-sided design approach is not always useful. HCgorilla needs a more complicated strategy, that is, hardware/software (H/S) codesign. Thus, we exploit the software support of HCgorilla, composed of a Java interface and parallelizing compilers. They are assumed to be installed in servers in order to reduce the load and increase the performance of HCgorilla-embedded clients. Since compilers are the essence of software's responsibility, we focus in this article on our recent results on the design, specifications, and prototyping of parallelizing compilers for HCgorilla. The parallelizing compilers are composed of a multicore compiler and a LIW compiler. They are specified to abstract parallelism from executable serial codes or the Java interface output and to output codes executable in parallel by HCgorilla. The prototyping compilers are written in Java. The evaluation using an arithmetic test program shows that the prototyping compilers perform reasonably compared with hand compilation.

  11. Reward-based learning under hardware constraints-using a RISC processor embedded in a neuromorphic substrate.

    Science.gov (United States)

    Friedmann, Simon; Frémaux, Nicolas; Schemmel, Johannes; Gerstner, Wulfram; Meier, Karlheinz

    2013-01-01

    In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider communication latency between wafer and the conventional control computer system that is simulating the environment. This latency increases the delay, with which the reward is sent to the embedded processor. Because of the time continuous operation of the analog synapses, delay can cause a deviation of the updates as compared to the not delayed situation. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.
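
    The learning rule itself can be sketched compactly: pair-based STDP accumulates into an eligibility trace, and a possibly delayed scalar reward gates the actual weight change. The sketch below is a generic reward-modulated STDP implementation; all time constants and amplitudes are illustrative, not BrainScaleS values:

```python
import numpy as np

# Generic reward-modulated STDP with an eligibility trace per synapse.
# Constants below are invented for illustration.

n_pre = 50
tau_pre, tau_post, tau_elig = 20.0, 20.0, 200.0   # trace time constants
a_plus, a_minus, lr = 0.01, -0.012, 0.5           # STDP amplitudes, learning rate

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.5, n_pre)   # synaptic weights
x = np.zeros(n_pre)                # presynaptic spike traces
y = 0.0                            # postsynaptic spike trace
elig = np.zeros(n_pre)             # eligibility trace per synapse

def step(pre_spikes, post_spike, reward, dt=1.0):
    """pre_spikes: 0/1 vector; post_spike: 0 or 1; reward may arrive delayed."""
    global y
    x[:] = x * np.exp(-dt / tau_pre) + pre_spikes
    y = y * np.exp(-dt / tau_post) + post_spike
    elig[:] = elig * np.exp(-dt / tau_elig)
    elig[:] += a_plus * x * post_spike        # pre-before-post pairing
    elig[:] += a_minus * y * pre_spikes       # post-before-pre pairing
    w[:] = np.clip(w + lr * reward * elig, 0.0, 1.0)  # reward gates update

for t in range(200):
    pre = (rng.random(n_pre) < 0.05).astype(float)
    post = float(rng.random() < 0.10)
    step(pre, post, reward=1.0 if t % 50 == 49 else 0.0)
print("mean weight after 200 steps: %.3f" % w.mean())
```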

  12. Alternative Water Processor Test Development

    Science.gov (United States)

    Pickering, Karen D.; Mitchell, Julie L.; Adam, Niklas M.; Barta, Daniel; Meyer, Caitlin E.; Pensinger, Stuart; Vega, Leticia M.; Callahan, Michael R.; Flynn, Michael; Wheeler, Ray

    2013-01-01

    The Next Generation Life Support Project is developing an Alternative Water Processor (AWP) as a candidate water recovery system for long duration exploration missions. The AWP consists of biological water processor (BWP) integrated with a forward osmosis secondary treatment system (FOST). The basis of the BWP is a membrane aerated biological reactor (MABR), developed in concert with Texas Tech University. Bacteria located within the MABR metabolize organic material in wastewater, converting approximately 90% of the total organic carbon to carbon dioxide. In addition, bacteria convert a portion of the ammonia-nitrogen present in the wastewater to nitrogen gas, through a combination of nitrification and denitrification. The effluent from the BWP system is low in organic contaminants, but high in total dissolved solids. The FOST system, integrated downstream of the BWP, removes dissolved solids through a combination of concentration-driven forward osmosis and pressure driven reverse osmosis. The integrated system is expected to produce water with a total organic carbon less than 50 mg/l and dissolved solids that meet potable water requirements for spaceflight. This paper describes the test definition, the design of the BWP and FOST subsystems, and plans for integrated testing.

  13. Alternative Water Processor Test Development

    Science.gov (United States)

    Pickering, Karen D.; Mitchell, Julie; Vega, Leticia; Adam, Niklas; Flynn, Michael; Wjee (er. Rau); Lunn, Griffin; Jackson, Andrew

    2012-01-01

    The Next Generation Life Support Project is developing an Alternative Water Processor (AWP) as a candidate water recovery system for long duration exploration missions. The AWP consists of biological water processor (BWP) integrated with a forward osmosis secondary treatment system (FOST). The basis of the BWP is a membrane aerated biological reactor (MABR), developed in concert with Texas Tech University. Bacteria located within the MABR metabolize organic material in wastewater, converting approximately 90% of the total organic carbon to carbon dioxide. In addition, bacteria convert a portion of the ammonia-nitrogen present in the wastewater to nitrogen gas, through a combination of nitrification and denitrification. The effluent from the BWP system is low in organic contaminants, but high in total dissolved solids. The FOST system, integrated downstream of the BWP, removes dissolved solids through a combination of concentration-driven forward osmosis and pressure driven reverse osmosis. The integrated system is expected to produce water with a total organic carbon less than 50 mg/l and dissolved solids that meet potable water requirements for spaceflight. This paper describes the test definition, the design of the BWP and FOST subsystems, and plans for integrated testing.

  14. Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors

    Energy Technology Data Exchange (ETDEWEB)

    Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep

    2014-12-01

    The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
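
    The domain-decomposition idea can be illustrated on a toy problem. The sketch below runs overlapping alternating Schwarz on a 1D Poisson equation: each subdomain solve needs only local data plus a small overlap, which is what cuts data movement. The QCD solver's mixed single/half precision and KNC-specific layout are not reproduced:

```python
import numpy as np

# Overlapping (alternating) Schwarz sketch for -u'' = 1 on [0, 1] with
# u(0) = u(1) = 0, discretized by standard second-order differences.

n = 101                      # grid points including boundaries
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)

def solve_subdomain(lo, hi):
    """Direct solve of -u'' = f on (lo, hi) with current u as Dirichlet data."""
    m = hi - lo - 1                       # interior unknowns
    A = (np.diag(2.0 * np.ones(m)) +
         np.diag(-1.0 * np.ones(m - 1), 1) +
         np.diag(-1.0 * np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2                  # boundary data from the overlap
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

mid, overlap = n // 2, 10
for sweep in range(30):                   # alternate between the two halves
    solve_subdomain(0, mid + overlap)
    solve_subdomain(mid - overlap, n - 1)

xs = np.linspace(0.0, 1.0, n)
exact = 0.5 * xs * (1.0 - xs)             # analytic solution of -u'' = 1
print("max error: %.2e" % np.abs(u - exact).max())
```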

  15. Operation of commercial R3000 processors in the Low Earth Orbit (LEO) space environment

    Energy Technology Data Exchange (ETDEWEB)

    Kaschmitter, J.L.; Shaeffer, D.L.; Colella, N.J. [Lawrence Livermore National Lab., CA (United States); McKnett, C.L.; Coakley, P.G. [JAYCOR, Santa Monica, CA (United States)

    1990-08-09

    Spacecraft processors must operate with minimal degradation of performance in the Low Earth Orbit (LEO) radiation environment, which includes the effects of total accumulated ionizing dose and Single Event Phenomena (SEP) caused by protons and cosmic rays. Commercially available microprocessors can offer a number of advantages relative to radiation-hardened devices, including lower cost, reduced development and procurement time, extensive software support, higher density and performance. However, commercially available systems are not normally designed to tolerate effects induced by the LEO environment. Lawrence Livermore National Laboratory (LLNL) and others have extensively tested the MIPS R3000 Reduced Instruction Set Computer (RISC) microprocessor family for operation in LEO environments. We have characterized total dose and SEP effects for altitudes and inclinations of interest to systems operating in LEO, and we postulate techniques for detection and alleviation of SEP effects based on experimental results. 12 refs.

  16. Operation of commercial R3000 processors in the low earth orbit (LEO) space environment

    Energy Technology Data Exchange (ETDEWEB)

    Kaschmitter, J.L.; Shaeffer, D.L.; Colella, N.J. (Lawrence Livermore National Lab., CA (United States)); McKnett, C.L.; Coakley, P.G. (JAYCOR, Santa Monica, CA (United States))

    1991-12-01

    Spacecraft processors must operate with minimal degradation of performance in the Low Earth Orbit (LEO) radiation environment, which includes the effects of total accumulated ionizing dose and Single Event Phenomena (SEP) caused by protons and cosmic rays. Commercially available microprocessors can offer a number of advantages relative to radiation-hardened devices, including lower cost, reduced development and procurement time, extensive software support, higher density and performance. However, commercially available systems are not normally designed to tolerate effects induced by the LEO environments. Lawrence Livermore National Laboratory (LLNL) and others have extensively tested the MIPS R3000 Reduced Instruction Set Computer (RISC) microprocessor family for operation in LEO environments. In this paper the authors characterize total dose and SEP effects for altitudes and inclinations of interest to systems operating in LEO, and the authors postulate techniques for detection and alleviation of SEP effects based on experimental results.

  17. Optically stimulated luminescence sensitivity changes in quartz due to repeated use in single aliquot readout: Experiments and computer simulations

    DEFF Research Database (Denmark)

    McKeever, S.W.S.; Bøtter-Jensen, L.; Agersnap Larsen, N.

    1996-01-01

    As part of a study to examine sensitivity changes in single aliquot techniques using optically stimulated luminescence (OSL) a series of experiments has been conducted with single aliquots of natural quartz, and the data compared with the results of computer simulations of the type of processes...

  18. Partition-Based Hardware Transactional Memory for Many-Core Processors

    OpenAIRE

    Liu, Yi; Zhang, Xinwei; Wang, Yonghui; Qian, Depei; Chen, Yali; Wu, Jin

    2013-01-01

    Part 4: Session 4: Multi-core Computing and GPU; International audience; Transactional memory is an appealing technology which frees the programmer from lock-based programming. However, most current hardware transactional memory systems are designed for multi-core processors, and may face challenges as the number of processor cores grows in many-core systems, such as inefficient utilization of transactional buffers and the unsolved problem of transactional buffer overflow. This paper propos...

  19. "Defining Computer 'Speed': An Unsolved Challenge"

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Abstract: The reason we use computers is their speed, and the reason we use parallel computers is that they're faster than single-processor computers. Yet, after 70 years of electronic digital computing, we still do not have a solid definition of what computer 'speed' means, or even what it means to be 'faster'. Unlike measures in physics, where the definition of speed is rigorous and unequivocal, in computing there is no definition of speed that is universally accepted. As a result, computer customers have made purchases misguided by dubious information, computer designers have optimized their designs for the wrong goals, and computer programmers have chosen methods that optimize the wrong things. This talk describes why some of the obvious and historical ways of defining 'speed' haven't served us well, and the things we've learned in the struggle to find a definition that works. Biography: Dr. John Gustafson is a Director ...

  20. A Fully Automatic Instantaneous Fire Hotspot Detection Processor Based on AVHRR Imagery—A TIMELINE Thematic Processor

    OpenAIRE

    Plank, Simon; Fuchs, Eva-Maria; Frey, Corinne

    2017-01-01

    The German Aerospace Center’s (DLR) TIMELINE project aims to develop an operational processing and data management environment to process 30 years of National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) raw data into L1b, L2 and L3 products. This article presents the current status of the fully automated L2 active fire hotspot detection processor, which is based on single-temporal datasets in orbit geometry. Three different probability le...

  1. A programmable sound processor for advanced hearing aid research.

    Science.gov (United States)

    McDermott, H

    1998-03-01

    A portable sound processor has been developed to facilitate research on advanced hearing aids. Because it is based on a digital signal processing integrated circuit (Motorola DSP56001), it can readily be programmed to execute novel algorithms. Furthermore, the parameters of these algorithms can be adjusted quickly and easily to suit the specific hearing characteristics of users. In the processor, microphone signals are digitized to a precision of 12 bits at a sampling rate of approximately 12 kHz for input to the DSP device. Subsequently, processed samples are delivered to the earphone by a novel, fully-digital class-D driver. This driver provides the advantages of a conventional class-D amplifier (high maximum output, low power consumption, low distortion) without some of the disadvantages (such as the need for precise analog circuitry). In addition, a cochlear implant driver is provided so that the processor is suitable for hearing-impaired people who use an implant and an acoustic hearing aid together. To reduce the computational demands on the DSP device, and therefore the power consumption, a running spectral analysis of incoming signals is provided by a custom-designed switched-capacitor integrated circuit incorporating 20 bandpass filters. The complete processor is pocket-sized and powered by batteries. An example is described of its use in providing frequency-shaped amplification for aid users with severe hearing impairment. Speech perception tests confirmed that the processor performed significantly better than the subjects' own hearing aids, probably because the digital filter provided a frequency response generally closer to the optimum for each user than the simpler analog aids.
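
    The frequency-shaped amplification described above can be illustrated with a short filterbank sketch: split the input into bands, apply per-band gains, and recombine. This is a minimal Python/SciPy illustration, not the DSP56001 firmware; the 20-band split follows the description above, while the band edges and gain prescription are invented for the example.

    import numpy as np
    from scipy.signal import butter, lfilter

    FS = 12000  # sampling rate in Hz, matching the processor described above

    def make_filterbank(n_bands=20, f_lo=100.0, f_hi=5500.0, fs=FS):
        # Second-order Butterworth bandpass filters on a log-spaced grid.
        edges = np.geomspace(f_lo, f_hi, n_bands + 1)
        return [butter(2, [edges[i], edges[i + 1]], btype="band", fs=fs)
                for i in range(n_bands)]

    def frequency_shape(x, gains_db, bank):
        # Split the signal into bands, apply per-band gain, and recombine.
        gains = 10.0 ** (np.asarray(gains_db) / 20.0)
        return sum(g * lfilter(b, a, x) for g, (b, a) in zip(gains, bank))

    t = np.arange(FS) / FS
    x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
    bank = make_filterbank()
    gains_db = np.linspace(0.0, 30.0, len(bank))  # hypothetical high-frequency boost
    y = frequency_shape(x, gains_db, bank)
    print("output RMS:", float(np.sqrt(np.mean(y ** 2))))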

  2. Producing chopped firewood with firewood processors

    International Nuclear Information System (INIS)

    Kaerhae, K.; Jouhiaho, A.

    2009-01-01

    The TTS Institute's research and development project studied both the productivity of new, chopped firewood processors (cross-cutting and splitting machines) suitable for professional and independent small-scale production, and the costs of the chopped firewood produced. Seven chopped firewood processors were tested in the research, six of which were sawing processors and one a shearing processor. The chopping work was carried out using wood feeding racks and a wood lifter. The work was also carried out without any feeding appliances. Altogether 132.5 solid m³ of wood were chopped in the time studies. The firewood processor used had the most significant impact on chopping work productivity. In addition to the firewood processor, the stem mid-diameter and the lengths of the raw material and of the firewood were also found to affect productivity. The wood feeding systems also affected productivity. If there is a feeding rack and hydraulic grapple loader available for use in chopping firewood, then it is worth using the wood feeding rack. A wood lifter is only worth using with the largest stems (over 20 cm mid-diameter) if a feeding rack cannot be used. When producing chopped firewood from small-diameter wood, i.e. with a mid-diameter less than 10 cm, the costs of chopping work were over 10 EUR per solid m³ with sawing firewood processors. The shearing firewood processor with a guillotine blade achieved a cost level of 5 EUR per solid m³ when the mid-diameter of the chopped stem was 10 cm. In addition to the raw material, cost-efficient chopping work also requires several hundred annual operating hours with a firewood processor, which is difficult for individual firewood entrepreneurs to achieve. The operating hours of firewood processors can be increased to the required level by the joint use of processors by a number of firewood entrepreneurs. (author)

  3. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
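
    The decision procedure described above can be sketched as follows: enumerate the factorizations of p processors into an r x c grid, evaluate a predicted execution time for each, and keep the minimum. The cost model below is a generic compute-plus-communication estimate with made-up coefficients, standing in for the paper's detailed blocked factored Jacobi model.

    def grid_shapes(p):
        # All r x c factorizations of p processors.
        return [(r, p // r) for r in range(1, p + 1) if p % r == 0]

    def predicted_time(n, r, c, t_flop=1e-9, t_start=1e-5, t_word=1e-8):
        # Toy model: per-processor flop count plus row/column broadcast costs.
        comp = (2.0 * n ** 3 / (r * c)) * t_flop
        comm = (r + c) * (t_start + (n / max(r, c)) * t_word)
        return comp + comm

    def best_shape(n, p):
        # Pick the grid shape the model predicts to be fastest.
        return min(grid_shapes(p), key=lambda rc: predicted_time(n, *rc))

    for n in (100, 200, 1000):
        print(n, best_shape(n, 512))

    As in the paper, the best shape shifts with the matrix size: small problems favor fewer, squarer grids because communication startup dominates, while large problems can exploit the full machine.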

  4. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e. the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  5. Brain perfusion single photon emission computed tomography in major psychiatric disorders: From basics to clinical practice

    International Nuclear Information System (INIS)

    Santra, Amburanjan; Kumar, Rakesh

    2014-01-01

    Brain single photon emission computed tomography (SPECT) is a well-established and reliable method to assess brain function through measurement of regional cerebral blood flow (rCBF). It can be used to define a patient's pathophysiological status when neurological or psychiatric symptoms cannot be explained by anatomical neuroimaging findings. Though there is ample evidence validating brain SPECT as a technique to track human behavior and correlate psychiatric disorders with dysfunction of specific brain regions, only a few psychiatrists have adopted brain SPECT in routine clinical practice. It can be utilized to evaluate the involvement of brain regions in a particular patient, to individualize treatment on the basis of SPECT findings, to monitor the treatment response, and to modify treatment if necessary. In this article, we have reviewed the available studies in this regard from the existing literature and tried to present the evidence for establishing the clinical role of brain SPECT in major psychiatric illnesses

  6. Brain perfusion single photon emission computed tomography in children after acute encephalopathy

    International Nuclear Information System (INIS)

    Kurihara, Mana; Nakae, Yoichiro; Kohagizawa, Toshitaka; Eto, Yoshikatsu

    2005-01-01

    We studied single photon emission computed tomography (SPECT) in 15 children with acute encephalopathy more than 1 year after onset, using technetium-99m-L,L-ethyl cysteinate dimer (99mTc-ECD) and a three-dimensional stereotaxic region-of-interest template. Regional cerebral blood flow was evaluated, and patients were divided into three groups according to the severity of disability: absent or mild, moderate, and severe. There was no abnormality on SPECT in the patients without disability or with mild disability. Diffuse hypoperfusion was shown in the groups with moderate and severe disability. The patients with severe disability showed hypoperfusion in the pericallosal, frontal and central areas that was more pronounced than in the patients with moderate disability. (author)

  7. Hot water epilepsy: Phenotype and single photon emission computed tomography observations

    Directory of Open Access Journals (Sweden)

    Mehul Patel

    2014-01-01

    Full Text Available We studied the anatomical correlates of reflex hot water epilepsy (HWE) using multimodality investigations, viz. magnetic resonance imaging (MRI), electroencephalography (EEG), and single photon emission computed tomography (SPECT). Five men (mean age: 27.0 ± 5.8 years) with HWE underwent MRI of the brain, video-EEG studies, and SPECT scans. These were correlated with phenotypic presentations. Seizures could be precipitated in three patients by pouring hot water over the head, and the semiology of seizures was suggestive of temporal lobe epilepsy. Ictal SPECT showed hyperperfusion in: left medial temporal - one, left lateral temporal - one, and right parietal - one. Interictal SPECT was normal in all five patients and did not help in localization. MRI and interictal EEG were normal in all the patients. The clinical and SPECT studies suggested the temporal lobe as the seizure onset zone in some of the patients with HWE.

  8. A precise goniometer/tensiometer using a low cost single-board computer

    Science.gov (United States)

    Favier, Benoit; Chamakos, Nikolaos T.; Papathanasiou, Athanasios G.

    2017-12-01

    Measuring the surface tension and the Young contact angle of a droplet is extremely important for many industrial applications. Here, considering the booming interest in small and cheap but precise experimental instruments, we have constructed a low-cost contact angle goniometer/tensiometer based on a single-board computer (Raspberry Pi). The device runs an axisymmetric drop shape analysis (ADSA) algorithm written in Python. The code, here named DropToolKit, was developed in-house. We initially present the mathematical framework of our algorithm and then validate our software tool against other well-established ADSA packages, including the commercial ramé-hart DROPimage Advanced as well as the DropAnalysis plugin in ImageJ. After successfully testing various combinations of liquids and solid surfaces, we concluded that our prototype device would be highly beneficial for industrial applications as well as for scientific research in wetting phenomena compared to the commercial solutions.
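
    A minimal sketch of the geometry behind such a goniometer is given below. Full ADSA, as implemented in DropToolKit, fits the Young-Laplace equation to the drop profile; the sketch instead uses a spherical-cap approximation (a least-squares circle fit to the contour), which is adequate only when gravity flattening is negligible, and the synthetic contour is invented for the example.

    import numpy as np

    def fit_circle(x, y):
        # Algebraic least-squares circle fit: returns center (a, b) and radius r.
        A = np.column_stack([x, y, np.ones_like(x)])
        c0, c1, c2 = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
        a, b = c0 / 2.0, c1 / 2.0
        return a, b, np.sqrt(c2 + a ** 2 + b ** 2)

    def contact_angle_deg(x, y, baseline_y=0.0):
        # Angle between the baseline and the cap tangent at the contact point.
        _, cy, r = fit_circle(x, y)
        h = cy - baseline_y  # height of the circle center above the surface
        return np.degrees(np.arccos(np.clip(-h / r, -1.0, 1.0)))

    # Synthetic 120-degree sessile drop: unit circle centered 0.5 above the base.
    phi = np.linspace(0.0, 2.0 * np.pi, 400)
    x, y = np.cos(phi), 0.5 + np.sin(phi)
    keep = y >= 0.0
    print(contact_angle_deg(x[keep], y[keep]))  # approximately 120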

  9. A CNN-Specific Integrated Processor

    Science.gov (United States)

    Malki, Suleyman; Spaanenburg, Lambert

    2009-12-01

    Integrated Processors (IP) are algorithm-specific cores that either by programming or by configuration can be re-used within many microelectronic systems. This paper looks at realizing Cellular Neural Networks (CNN) as IP. First, current digital implementations are reviewed, and the memory-processor bandwidth issues are analyzed. Then a generic view is taken on the structure of the network, and a new intra-communication protocol based on rotating wheels is proposed. It is shown that this provides for guaranteed high performance with a minimal network interface. The resulting node is small and supports multi-level CNN designs, giving the system a 30-fold increase in capacity compared to classical designs. As it facilitates multiple operations on a single image, and single operations on multiple images, with minimal access to the external image memory, balancing the internal and external data transfer requirements optimizes the system operation. In conventional digital CNN designs, the treatment of boundary nodes requires additional logic to handle the CNN value propagation scheme. In the new architecture, only a slight modification of the existing cells is necessary to model the boundary effect. A typical prototype for visual pattern recognition will house 4096 CNN cells with a 2% overhead for making it an IP.

  10. A CNN-Specific Integrated Processor

    Directory of Open Access Journals (Sweden)

    Suleyman Malki

    2009-01-01

    Full Text Available Integrated Processors (IP) are algorithm-specific cores that either by programming or by configuration can be re-used within many microelectronic systems. This paper looks at realizing Cellular Neural Networks (CNN) as IP. First, current digital implementations are reviewed, and the memory-processor bandwidth issues are analyzed. Then a generic view is taken on the structure of the network, and a new intra-communication protocol based on rotating wheels is proposed. It is shown that this provides for guaranteed high performance with a minimal network interface. The resulting node is small and supports multi-level CNN designs, giving the system a 30-fold increase in capacity compared to classical designs. As it facilitates multiple operations on a single image, and single operations on multiple images, with minimal access to the external image memory, balancing the internal and external data transfer requirements optimizes the system operation. In conventional digital CNN designs, the treatment of boundary nodes requires additional logic to handle the CNN value propagation scheme. In the new architecture, only a slight modification of the existing cells is necessary to model the boundary effect. A typical prototype for visual pattern recognition will house 4096 CNN cells with a 2% overhead for making it an IP.
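
    The network dynamics that such an integrated processor implements can be sketched in a few lines. The following Python/NumPy illustration integrates the standard CNN state equation with a common edge-detection template pair; the templates, zero-padded boundary, and grid size are illustrative choices, not details from the paper.

    import numpy as np
    from scipy.ndimage import convolve

    def cnn_step(x, u, A, B, z, dt=0.1):
        # One Euler step of x' = -x + A*y + B*u + z, with the CNN
        # piecewise-linear output y = 0.5 * (|x + 1| - |x - 1|).
        y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
        return x + dt * (-x + convolve(y, A, mode="constant")
                         + convolve(u, B, mode="constant") + z)

    u = np.zeros((32, 32)); u[8:24, 8:24] = 1.0       # input image: a square
    A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
    B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
    x = np.zeros_like(u)
    for _ in range(200):                               # iterate to equilibrium
        x = cnn_step(x, u, A, B, z=-1.0)
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
    print(int((y > 0.0).sum()))                        # cells marking the edges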

  11. Transient computation fluid dynamics modeling of a single proton exchange membrane fuel cell with serpentine channel

    Science.gov (United States)

    Hu, Guilin; Fan, Jianren

    The proton exchange membrane fuel cell (PEMFC) has become a promising candidate for the power source of electrical vehicles because of its low pollution, low noise and especially fast startup and transient responses at low temperatures. A transient, three-dimensional, non-isothermal and single-phase mathematical model based on computation fluid dynamics has been developed to describe the transient process and the dynamic characteristics of a PEMFC with a serpentine fluid channel. The effects of water phase change and heat transfer, as well as electrochemical kinetics and multicomponent transport on the cell performance are taken into account simultaneously in this comprehensive model. The developed model was employed to simulate a single laboratory-scale PEMFC with an electrode area of about 20 cm². The dynamic behavior of the characteristic parameters such as reactant concentration, pressure loss, temperature on the membrane surface of the cathode side and current density during the start-up process were computed and are discussed in detail. Furthermore, transient responses of the fuel cell characteristics during step changes and sinusoidal changes in the stoichiometric flow ratio of the cathode inlet stream, cathode inlet stream humidity and cell voltage are also studied and analyzed, and interesting undershoot/overshoot behavior of some variables was found. It was also found that the startup and transient response time of a PEM fuel cell is of the order of a second, which is similar to the simulation results predicted by most models. The result is an important guide for the optimization of PEMFC designs and dynamic operation.

  12. Computational Analysis of Single Nucleotide Polymorphisms Associated with Altered Drug Responsiveness in Type 2 Diabetes

    Directory of Open Access Journals (Sweden)

    Valerio Costa

    2016-06-01

    Full Text Available Type 2 diabetes (T2D) is one of the most frequent mortality causes in western countries, with rapidly increasing prevalence. Anti-diabetic drugs are the first therapeutic approach, although many patients develop drug resistance. Most drug responsiveness variability can be explained by genetic causes. Inter-individual variability is principally due to single nucleotide polymorphisms, and differential drug responsiveness has been correlated to alterations in genes involved in drug metabolism (CYP2C9) or insulin signaling (IRS1, ABCC8, KCNJ11 and PPARG). However, most genome-wide association studies did not provide clues about the contribution of DNA variations to impaired drug responsiveness. Thus, characterizing T2D drug responsiveness variants is needed to guide clinicians toward tailored therapeutic approaches. Here, we extensively investigated polymorphisms associated with altered drug response in T2D, predicting their effects in silico. Combining different computational approaches, we focused on the expression pattern of genes correlated to drug resistance and inferred evolutionary conservation of polymorphic residues, computationally predicting the biochemical properties of polymorphic proteins. Using RNA-Sequencing followed by targeted validation, we identified and experimentally confirmed that two nucleotide variations in the CAPN10 gene—currently annotated as intronic—fall within two new transcripts in this locus. Additionally, we found that a Single Nucleotide Polymorphism (SNP), currently reported as intergenic, maps to the intron of a new transcript, harboring CAPN10 and GPR35 genes, which undergoes non-sense mediated decay. Finally, we analyzed variants that fall into non-coding regulatory regions of yet underestimated functional significance, predicting that some of them can potentially affect gene expression and/or post-transcriptional regulation of mRNAs by affecting splicing.

  13. Further computer appreciation

    CERN Document Server

    Fry, T F

    2014-01-01

    Further Computer Appreciation is a comprehensive coverage of the principles and aspects of computer appreciation. The book starts by describing the development of computers from the first to the third computer generations, the development of processors and storage systems, up to the present position of computers and future trends. The text tackles the basic elements, concepts and functions of digital computers, computer arithmetic, input media and devices, and computer output. The basic central processor functions, data storage and the organization of data by classification of computer files,

  14. A Real-Time Sound Field Rendering Processor

    Directory of Open Access Journals (Sweden)

    Tan Yiyu

    2017-12-01

    Full Text Available Real-time sound field renderings are computationally intensive and memory-intensive. Traditional rendering systems based on computer simulations are limited by memory bandwidth and arithmetic units. The computation is time-consuming, and the sample rate of the output sound is low because of the long computation time at each time step. In this work, a processor with a hybrid architecture is proposed to speed up computation and improve the sample rate of the output sound, and an interface is developed for system scalability by simply cascading many chips to enlarge the simulated area. To render a three-minute Beethoven wave sound in a small shoe-box room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the sound rendering at run-time, while the software simulation with OpenMP parallelization takes about 12.70 min on a personal computer (PC) with 32 GB of random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput of the software simulation is about 194 M grids/s, while it is 51.2 G grids/s on the prototype machine, even though the clock frequency of the prototype machine is much lower than that of the PC. The rendering processor with one processing element (PE) and interfaces consumes about 238,515 gates when fabricated with the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and the power consumption is about 143.8 mW.
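
    The record does not name the numerical scheme, but grid-based sound field renderers of this kind typically solve the wave equation by finite differences on a regular grid. The sketch below shows such an update loop as an assumed stand-in: a toy grid matching the 1.28 m x 1.28 m x 0.64 m room at 2 cm spacing, a simple zero-pressure boundary, and an impulsive source; it illustrates the kind of stencil such hardware parallelizes, not the chip's actual algorithm.

    import numpy as np

    C, DX = 343.0, 0.02                  # speed of sound (m/s), grid spacing (m)
    DT = DX / (C * np.sqrt(3.0))         # CFL-stable time step for a 3D grid
    NX, NY, NZ = 64, 64, 32              # 1.28 m x 1.28 m x 0.64 m room

    p = np.zeros((NX, NY, NZ))           # pressure at time step n
    pm = np.zeros_like(p)                # pressure at time step n - 1
    p[32, 32, 16] = 1.0                  # impulsive source near the room center

    for _ in range(200):
        lap = (-6.0 * p[1:-1, 1:-1, 1:-1]
               + p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
               + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
               + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2])
        pn = np.zeros_like(p)            # zero-pressure (release) boundary
        pn[1:-1, 1:-1, 1:-1] = (2.0 * p[1:-1, 1:-1, 1:-1]
                                - pm[1:-1, 1:-1, 1:-1]
                                + (C * DT / DX) ** 2 * lap)
        p, pm = pn, p

    print(p[16, 16, 8])                  # sample at a receiver position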

  15. The communication processor of TUMULT-64

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Jansen, P.G.

    1988-01-01

    Tumult (Twente University MULTi-processor system) is a modular extendible multi-processor system designed and implemented at the Twente University of Technology in co-operation with Oce Nederland B.V. and the Dr. Neher Laboratories (Dutch PTT). Characteristics of the hardware are: MIMD type,

  16. Models of Communication for Multicore Processors

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Sørensen, Rasmus Bo; Sparsø, Jens

    2015-01-01

    To efficiently use multicore processors we need to ensure that almost all data communication stays on chip, i.e., the bits moved between tasks executing on different processor cores do not leave the chip. Different forms of on-chip communication are supported by different hardware mechanisms, e...

  17. A Study of Communication Processor Systems

    Science.gov (United States)

    1979-12-01

    by S. The processor and manually controlled switches (Smp, Skp) enable a link between each processor and controllers (Kio), which in turn allow access to... processor is the base level, which scans all lines and initiates all non-interrupt-driven processes. The voice switching function is performed by one

  18. The TM3270 Media-processor

    NARCIS (Netherlands)

    van de Waerdt, J.W.

    2006-01-01

    In this thesis, we present the TM3270 VLIW media-processor, the latest of the TriMedia processors, and describe the innovations with respect to its predecessor: the TM3260. We describe enhancements to the load/store unit design, such as a new data prefetching technique, and architectural

  19. Diversity of bilateral synaptic assemblies for binaural computation in midbrain single neurons.

    Science.gov (United States)

    He, Na; Kong, Lingzhi; Lin, Tao; Wang, Shaohui; Liu, Xiuping; Qi, Jiyao; Yan, Jun

    2017-11-01

    Binaural hearing confers many beneficial functions but our understanding of its underlying neural substrates is limited. This study examines the bilateral synaptic assemblies and binaural computation (or integration) in the central nucleus of the inferior colliculus (ICc) of the auditory midbrain, a key convergent center. Using in-vivo whole-cell patch-clamp, the excitatory and inhibitory postsynaptic potentials (EPSPs/IPSPs) of single ICc neurons to contralateral, ipsilateral and bilateral stimulation were recorded. According to the contralateral and ipsilateral EPSP/IPSP, 7 types of bilateral synaptic assemblies were identified. These include EPSP-EPSP (EE), E-IPSP (EI), E-no response (EO), II, IE, IO and complex-mode (CM) neurons. The CM neurons showed frequency- and/or amplitude-dependent EPSPs/IPSPs to contralateral or ipsilateral stimulation. Bilateral stimulation induced EPSPs/IPSPs that could be larger than (facilitation), similar to (ineffectiveness) or smaller than (suppression) those induced by contralateral stimulation. Our findings have allowed our group to characterize novel neural circuitry for binaural computation in the midbrain. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Kajian dan Implementasi Real Time Operating System pada Single Board Computer Berbasis Arm

    Directory of Open Access Journals (Sweden)

    Wiedjaja A

    2014-06-01

    Full Text Available An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and braking systems in cars (auto braking systems), which need a high level of reliability, require an operating system that operates in real time. This study aims to assess the implementation of a Linux-based operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with a dual-core ARM Cortex-A9 (TI OMAP 4460). The research was conducted by implementing the general-purpose OS Ubuntu 12.04 OMAP4-armhf and the RTOS Linux 3.4.0-rt17+ on the PandaBoard ES, and then comparing the latency of each OS under no-load and full-load conditions. The results show that the maximum latency of the RTOS under full load is 45 µs, much smaller than the maximum latency of the GPOS under full load, 17,712 µs. The lower latency demonstrates that the RTOS can run processes within a given period of time much more reliably than the GPOS.
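
    The latency comparison described above can be reproduced in outline with a simple wake-up latency probe, in the spirit of tools like cyclictest: sleep for a fixed interval and record how late the wake-up arrives. The sketch below is illustrative; the interval and iteration count are arbitrary, and Python-level timing adds overhead of its own, so it demonstrates the procedure rather than the paper's numbers.

    import time

    def worst_wakeup_latency(iterations=1000, interval_s=0.001):
        # Sleep for a fixed interval and record how late each wake-up is.
        worst = 0.0
        for _ in range(iterations):
            t0 = time.monotonic()
            time.sleep(interval_s)
            worst = max(worst, time.monotonic() - t0 - interval_s)
        return worst

    print(f"worst-case wake-up latency: {worst_wakeup_latency() * 1e6:.1f} us")

    On an RTOS kernel the worst case stays bounded even under full load; on a general-purpose kernel it can grow by orders of magnitude, which is the difference the study measures.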

  1. Single Layer Bismuth Iodide: Computational Exploration of Structural, Electrical, Mechanical and Optical Properties

    Science.gov (United States)

    Ma, Fengxian; Zhou, Mei; Jiao, Yalong; Gao, Guoping; Gu, Yuantong; Bilic, Ante; Chen, Zhongfang; Du, Aijun

    2015-12-01

    Layered graphitic materials exhibit intriguing new electronic structure, and the search for new types of two-dimensional (2D) monolayers is of importance for the fabrication of next generation miniature electronic and optoelectronic devices. By means of density functional theory (DFT) computations, we investigated in detail the structural, electronic, mechanical and optical properties of the single-layer bismuth iodide (BiI3) nanosheet. Monolayer BiI3 is dynamically stable as confirmed by the computed phonon spectrum. The cleavage energy (Ecl) and interlayer coupling strength of bulk BiI3 are comparable to the experimental values of graphite, which indicates that the exfoliation of BiI3 is highly feasible. The obtained stress-strain curve shows that the BiI3 nanosheet is a brittle material with a breaking strain of 13%. The BiI3 monolayer has an indirect band gap of 1.57 eV with spin-orbit coupling (SOC), indicating its potential application for solar cells. Furthermore, the band gap of the BiI3 monolayer can be modulated by biaxial strain. Most interestingly, interfacing electrically active graphene with the monolayer BiI3 nanosheet leads to enhanced light absorption compared to that in the pure monolayer BiI3 nanosheet, highlighting its great potential applications in photonics and photovoltaic solar cells.

  2. Single photon emission computed tomography using a regularizing iterative method for attenuation correction

    International Nuclear Information System (INIS)

    Soussaline, Francoise; Cao, A.; Lecoq, G.

    1981-06-01

    An analytically exact solution to the attenuated tomographic operator is proposed. The technique, called the Regularizing Iterative Method (RIM), belongs to the iterative class of procedures in which a priori knowledge can be introduced on the evaluation of the size and shape of the activity domain to be reconstructed, and on the exact attenuation distribution. The technique is so named because the relaxation factor used leads to fast convergence and provides noise filtering within a small number of iterations. The effectiveness of the method was tested on the Single Photon Emission Computed Tomography (SPECT) reconstruction problem, with the goal of precise attenuation correction before quantitative study. Its implementation involves the use of a rotating scintillation camera based SPECT detector connected to a minicomputer system. Mathematical simulations of cylindrical uniformly attenuated phantoms indicate that, within the range of the a priori calculated relaxation factor, a fast converging solution can always be found with a (contrast) accuracy of the order of 0.2 to 4%, depending on whether numerical errors and noise are taken into account. The sensitivity of the RIM algorithm to errors in the size of the reconstructed object and in the value of the attenuation coefficient μ was studied using the same simulation data. Extreme variations of ±15% in these parameters lead to errors of the order of ±20% in the quantitative results. Physical phantoms representing a variety of geometrical situations were also studied
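
    The abstract does not spell out the RIM update rule, but the class of procedure it belongs to can be sketched as a relaxed iterative solution of the projection equations A x = b, with a relaxation factor controlling convergence speed and noise filtering and the a priori object support imposed at each step. The Python sketch below uses a Landweber-style update and a toy system as stand-ins for the exact method.

    import numpy as np

    def regularizing_iteration(A, b, lam=0.5, n_iter=100, support=None):
        # Relaxed Landweber-style iteration; lam is the relaxation factor.
        x = np.zeros(A.shape[1])
        step = lam / np.linalg.norm(A, 2) ** 2   # scaled for convergence
        for _ in range(n_iter):
            x = x + step * A.T @ (b - A @ x)
            if support is not None:              # a priori size/shape knowledge
                x = np.where(support, np.maximum(x, 0.0), 0.0)
        return x

    rng = np.random.default_rng(0)
    A = rng.random((40, 20))                     # toy projection matrix
    x_true = np.zeros(20); x_true[5:15] = 1.0    # activity inside the support
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    support = (np.arange(20) >= 5) & (np.arange(20) < 15)
    x = regularizing_iteration(A, b, support=support)
    print(float(np.abs(x - x_true).max()))       # small reconstruction error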

  3. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

    Science.gov (United States)

    Shulaker, Max M.; Hills, Gage; Park, Rebecca S.; Howe, Roger T.; Saraswat, Krishna; Wong, H.-S. Philip; Mitra, Subhasish

    2017-07-01

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  4. Single-feature polymorphism discovery by computing probe affinity shape powers

    Directory of Open Access Journals (Sweden)

    Jia Haiyan

    2009-08-01

    Full Text Available Abstract Background Single-feature polymorphism (SFP) discovery is a rapid and cost-effective approach to identify DNA polymorphisms. However, high false positive rates and/or low sensitivity are prevalent in previously described SFP detection methods. This work presents a new computing method for SFP discovery. Results The probe affinity differences and affinity shape powers formed by the neighboring probes in each probe set were computed into SFP weight scores. This method was validated by known sequence information and was comprehensively compared with previously-reported methods using the same datasets. A web application using this algorithm has been implemented for SFP detection. Using this method, we identified 364 SFPs in a barley near-isogenic line pair carrying either the wild type or the mutant uniculm2 (cul2) allele. Most of the SFP polymorphisms were identified on chromosome 6H in the vicinity of the Cul2 locus. Conclusion This SFP discovery method exhibits better performance in specificity and sensitivity over previously-reported methods. It can be used for other organisms for which GeneChip technology is available. The web-based tool will facilitate SFP discovery. The 364 SFPs discovered in a barley near-isogenic line pair provide a set of genetic markers for fine mapping and future map-based cloning of the Cul2 locus.
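
    The exact affinity shape power formula is not given in the abstract; the sketch below illustrates the underlying idea with a stand-in score that combines each probe's affinity difference between genotypes with its contrast to the neighbouring probes, so that a single discordant probe in an otherwise concordant probe set scores highly. The numbers are invented for the example.

    import numpy as np

    def sfp_scores(affinity_a, affinity_b):
        # Per-probe SFP weight scores for one probe set across two genotypes.
        d = np.asarray(affinity_a) - np.asarray(affinity_b)  # affinity differences
        pad = np.pad(d, 1, mode="edge")
        neighbour_mean = 0.5 * (pad[:-2] + pad[2:])
        shape = np.abs(d - neighbour_mean)        # contrast to neighbouring probes
        return np.abs(d) * shape                  # combined weight score

    a = np.array([8.1, 8.0, 8.2, 8.1, 8.0, 8.1])  # genotype A, log-scale affinities
    b = np.array([8.0, 8.1, 8.1, 5.9, 8.1, 8.0])  # probe 4 overlaps a polymorphism
    print(np.round(sfp_scores(a, b), 2))          # score peaks at probe 4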

  5. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  6. Erica the Rhino: A Case Study in Using Raspberry Pi Single Board Computers for Interactive Art

    Directory of Open Access Journals (Sweden)

    Philip J. Basford

    2016-06-01

    Full Text Available Erica the Rhino is an interactive art exhibit created by the University of Southampton, UK. Erica was created as part of a city-wide art trail in 2013 called “Go! Rhinos”, curated by Marwell Wildlife, to raise awareness of rhino conservation. Erica arrived as a white fibreglass shell which was then painted and equipped with five Raspberry Pi Single Board Computers (SBCs). These computers allowed the audience to interact with Erica through a range of sensors and actuators. In particular, the audience could feed and stroke her to prompt reactions, as well as send her Tweets to change her behaviour. Pi SBCs were chosen because of their ready availability and their educational pedigree. During the deployment, ‘coding clubs’ were run in the shopping centre where Erica was located, and these allowed children to experiment with and program the same components used in Erica. The experience gained through numerous deployments around the country has enabled Erica to be upgraded to increase reliability and ease of maintenance, whilst the release of the Pi 2 has allowed her responsiveness to be improved.

  7. Design and implementation of a high performance network security processor

    Science.gov (United States)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burden from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure sockets layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.

  8. Advanced topics in security computer system design

    International Nuclear Information System (INIS)

    Stachniak, D.E.; Lamb, W.R.

    1989-01-01

    The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features

  9. Single photon emission computed tomography scanning: A predictor of outcome in vegetative state of head injury

    Directory of Open Access Journals (Sweden)

    Pralaya Nayak

    2011-01-01

    Full Text Available Background: Neurotrauma is one of the most important causes of death and disability. Some severely head-injured patients fail to show significant improvement despite aggressive neurosurgical management and end up in a vegetative state. Aims: To assess the outcome at six months and one year, using the Glasgow outcome scale (GOS), in this prospective study of patients with severe head injury who remained vegetative at one month. Materials and Methods: This prospective study was carried out in the Department of Neurosurgery, All India Institute of Medical Sciences (AIIMS), New Delhi, over a period of a year and a half (March 2002 through July 2003). In patients with severe head injury (GCS < 8), post resuscitation, neurological assessment was done with the Glasgow coma scale (GCS), pupillary light reflex, doll's eye movement and cold caloric test in all cases. Fifty patients who remained vegetative post injury at one month, according to the criteria of Jennett and Plum, were considered for the study. Brain SPECT (Single Photon Emission Computed Tomography) was carried out in selected cases. Statistical Analysis: Data analysis was done by Pearson's chi-square test using the SPSS software, Version 10 (California, USA). Results: Patients with preserved brainstem reflexes and with no perfusion defect on SPECT scan had a statistically significant favorable outcome. More than 40% of vegetative patients regained consciousness by the end of one year, of whom 24% had a favorable outcome in the form of moderate disability or good recovery. Conclusion: SPECT is better than computed tomography/magnetic resonance imaging (CT/MRI) as it assesses cerebral perfusion and functional injury rather than only detecting lesions. Further study with a control group is necessary to establish the role of SPECT in head injury.

  10. Computation and measurement of cell decision making errors using single cell data.

    Science.gov (United States)

    Habibi, Iman; Cheong, Raymond; Lipniacki, Tomasz; Levchenko, Andre; Emamian, Effat S; Abdi, Ali

    2017-04-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of the tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF-NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by the cell's inability to inhibit the TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method to model cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves.
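
    The two error metrics are straightforward to estimate from single-cell data once a decision threshold is fixed, as the sketch below shows. The Gaussian response distributions are synthetic stand-ins for measured readouts; widening them (more transduction noise) raises both error probabilities, which is the dependence the paper formulates.

    import numpy as np

    def decision_errors(responses_off, responses_on, threshold):
        # False alarm: declaring a signal when none is present.
        # Miss: failing to declare a signal that does exist.
        p_fa = float(np.mean(np.asarray(responses_off) > threshold))
        p_miss = float(np.mean(np.asarray(responses_on) <= threshold))
        return p_fa, p_miss

    rng = np.random.default_rng(1)
    off = rng.normal(0.0, 1.0, 10000)    # readout with no input signal
    on = rng.normal(2.5, 1.0, 10000)     # readout with the signal present
    print(decision_errors(off, on, threshold=1.25))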

  11. Quantitation of postexercise lung thallium-201 uptake during single photon emission computed tomography

    International Nuclear Information System (INIS)

    Kahn, J.K.; Carry, M.M.; McGhie, I.; Pippin, J.J.; Akers, M.S.; Corbett, J.R.

    1989-01-01

    To test the hypothesis that analysis of lung thallium uptake measured during single photon emission computed tomography (SPECT) yields supplementary clinical information as reported for planar imaging, quantitative analysis of lung thallium uptake following maximal exercise was performed in 40 clinically normal subjects (Group 1) and 15 angiographically normal subjects (Group 2). Lung thallium uptake was measured from anterior projection images using a ratio of heart-to-lung activities. Seventy subjects with coronary artery disease (CAD) (Group 3) determined by angiography (greater than or equal to 70% luminal stenosis) underwent thallium perfusion SPECT. Thirty-nine percent of these subjects had multivessel and 61% had single vessel CAD. Lung thallium uptake was elevated in 47 of 70 (67%) Group 3 subjects. Group 3 subjects with elevated lung thallium uptake did not differ from Group 3 subjects with normal lung thallium uptake with respect to extent or distribution of coronary artery disease, left ventricular function, or severity of myocardial ischemia as determined by exercise and redistribution thallium SPECT. Thus, the measurement of thallium lung uptake from anterior projection images obtained during SPECT frequently identifies patients with CAD, but it may not provide supplementary information regarding the extent of myocardial ischemia or ventricular dysfunction
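
    The lung uptake index used above reduces to a ratio of mean counts between two regions of interest on the anterior projection image, as in this toy sketch; the image and ROI coordinates are invented for illustration.

    import numpy as np

    def heart_to_lung_ratio(image, heart_roi, lung_roi):
        # Mean-count ratio between two rectangular ROIs ((row, col) slices).
        return image[heart_roi].mean() / image[lung_roi].mean()

    img = np.random.default_rng(2).poisson(20, (128, 128)).astype(float)
    img[40:70, 50:80] += 200.0                       # hot myocardial region
    heart = (slice(40, 70), slice(50, 80))
    lung = (slice(20, 40), slice(85, 115))
    print(heart_to_lung_ratio(img, heart, lung))     # low values flag lung uptake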

  12. The role of single photon emission computed tomography in bone imaging.

    Science.gov (United States)

    Sarikaya, I; Sarikaya, A; Holder, L E

    2001-01-01

    Single photon emission computed tomography (SPECT) of the bone is the second most frequently performed SPECT examination in routine nuclear medicine practice, with cardiac SPECT being the most frequent. Compared with planar scintigraphy, SPECT increases image contrast and improves lesion detection and localization. Studies have documented the unique diagnostic information provided by SPECT, particularly for avascular necrosis of the femoral head, in patients with back pain, for the differential diagnosis between malignant and benign spinal lesions, in the detection of metastatic cancer in the spine, for the diagnosis of temporomandibular joint internal derangement, and for the evaluation of acute and chronic knee pain. Although less rigorously documented, SPECT is being increasingly used in all types of situations that demand more precise anatomic localization of abnormal tracer uptake. The effectiveness of bone SPECT increases with the selection of the proper collimator, which allows one to acquire adequate counts and minimize the patient-to-detector distance. Low-energy, ultrahigh-resolution or high-resolution collimation is preferred over all-purpose collimators. Multihead gamma cameras can increase the counts obtained or shorten acquisition time, making SPECT acquisitions more practical in busy departments and also increasing image quality compared with single-head cameras. Iterative reconstruction, with the use of ordered-subsets expectation maximization, provides better quality images than classical filtered back-projection algorithms. Three-dimensional image analysis often aids lesion localization.

  13. Computational atomic and nuclear physics

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.; McGrory, J.B.

    1990-01-01

    The evolution of parallel processor supercomputers in recent years provides opportunities to investigate in detail many complex problems, in many branches of physics, which were considered to be intractable only a few years ago. But to take advantage of these new machines, one must have a better understanding of how the computers organize their work than was necessary with previous single processor machines. Equally important, the scientist must have this understanding as well as a good understanding of the structure of the physics problem under study. In brief, a new field of computational physics is evolving, which will be led by investigators who are highly literate both computationally and physically. A Center for Computationally Intensive Problems has been established with the collaboration of the University of Tennessee Science Alliance, Vanderbilt University, and the Oak Ridge National Laboratory. The objective of this Center is to carry out forefront research in computationally intensive areas of atomic, nuclear, particle, and condensed matter physics. An important part of this effort is the appropriate training of students. An early effort of this Center was to conduct a Summer School of Computational Atomic and Nuclear Physics. A distinguished faculty of scientists in atomic, nuclear, and particle physics gave lectures on the status of present understanding of a number of topics at the leading edge in these fields, and emphasized those areas where computational physics was in a position to make a major contribution. In addition, there were lectures on numerical techniques which are particularly appropriate for implementation on parallel processor computers and which are of wide applicability in many branches of science

  14. SCAN secure processor and its biometric capabilities

    Science.gov (United States)

    Kannavara, Raghudeep; Mertoguno, Sukarno; Bourbakis, Nikolaos

    2011-04-01

    This paper presents the design of the SCAN secure processor and its extended instruction set to enable secure biometric authentication. The SCAN secure processor is a modified SparcV8 processor architecture with a new instruction set to handle voice, iris, and fingerprint-based biometric authentication. The algorithms for processing biometric data are based on the local global graph methodology. The biometric modules are synthesized in reconfigurable logic and the results of the field-programmable gate array (FPGA) synthesis are presented. We propose to implement the above-mentioned modules in an off-chip FPGA co-processor. Further, the SCAN secure processor will offer SCAN-based encryption and decryption of 32-bit instructions and data.

  15. A fully reconfigurable photonic integrated signal processor

    Science.gov (United States)

    Liu, Weilin; Li, Ming; Guzzon, Robert S.; Norberg, Erik J.; Parker, John S.; Lu, Mingzhi; Coldren, Larry A.; Yao, Jianping

    2016-03-01

    Photonic signal processing has been considered a solution for overcoming the inherent speed limitations of electronics. Over the past few years, an impressive range of photonic integrated signal processors have been proposed, but they usually offer limited reconfigurability, a feature highly needed for the implementation of large-scale general-purpose photonic signal processors. Here, we report and experimentally demonstrate a fully reconfigurable photonic integrated signal processor based on an InP-InGaAsP material system. The proposed photonic signal processor is capable of performing reconfigurable signal processing functions including temporal integration, temporal differentiation and Hilbert transformation. The reconfigurability is achieved by controlling the injection currents to the active components of the signal processor. Our demonstration suggests great potential for chip-scale fully programmable all-optical signal processing.
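
    The three functions the chip reconfigures between have simple discrete-time analogues, sketched below for a sampled waveform. This is a numerical illustration of the operations themselves, not a model of the photonic device.

    import numpy as np
    from scipy.signal import hilbert

    def integrate(x, dt):
        return np.cumsum(x) * dt          # temporal integration

    def differentiate(x, dt):
        return np.gradient(x, dt)         # temporal differentiation

    def hilbert_transform(x):
        return np.imag(hilbert(x))        # Hilbert transform of a real signal

    t = np.linspace(0.0, 1.0, 1000)
    x = np.sin(2.0 * np.pi * 10.0 * t)
    dt = t[1] - t[0]
    print(integrate(x, dt)[-1])           # near zero over whole periods
    print(differentiate(x, dt)[0])        # near 2*pi*10 at t = 0
    print(hilbert_transform(x)[500])      # near -cos(2*pi*10*t) at mid-signal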

  16. FPGA Based Intelligent Co-operative Processor in Memory Architecture

    DEFF Research Database (Denmark)

    Ahmed, Zaki; Sotudeh, Reza; Hussain, Dil Muhammad Akbar

    2011-01-01

    In a continuing effort to improve computer system performance, Processor-In-Memory (PIM) architecture has emerged as an alternative solution. PIM architecture incorporates computational units and control logic directly on the memory to provide immediate access to the data. To exploit the potential benefits of PIM, a concept of Co-operative Intelligent Memory (CIM) was developed by the intelligent system group of the University of Hertfordshire, based on the previously developed Co-operative Pseudo-Intelligent Memory (CPIM). This paper provides an overview of the previous work (CPIM, CIM) and realization...

  17. Optical computed tomography for spatially isotropic four-dimensional imaging of live single cells.

    Science.gov (United States)

    Kelbauskas, Laimonas; Shetty, Rishabh; Cao, Bin; Wang, Kuo-Chen; Smith, Dean; Wang, Hong; Chao, Shi-Hui; Gangaraju, Sandhya; Ashcroft, Brian; Kritzer, Margaret; Glenn, Honor; Johnson, Roger H; Meldrum, Deirdre R

    2017-12-01

    Quantitative three-dimensional (3D) computed tomography (CT) imaging of living single cells enables orientation-independent morphometric analysis of the intricacies of cellular physiology. Since its invention, x-ray CT has become indispensable in the clinic for diagnostic and prognostic purposes due to its quantitative absorption-based imaging in true 3D that allows objects of interest to be viewed and measured from any orientation. However, x-ray CT has not been useful at the level of single cells because there is insufficient contrast to form an image. Recently, optical CT has been developed successfully for fixed cells, but this technology called Cell-CT is incompatible with live-cell imaging due to the use of stains, such as hematoxylin, that are not compatible with cell viability. We present a novel development of optical CT for quantitative, multispectral functional 4D (three spatial + one spectral dimension) imaging of living single cells. The method applied to immune system cells offers truly isotropic 3D spatial resolution and enables time-resolved imaging studies of cells suspended in aqueous medium. Using live-cell optical CT, we found a heterogeneous response to mitochondrial fission inhibition in mouse macrophages and differential basal remodeling of small (0.1 to 1 fl) and large (1 to 20 fl) nuclear and mitochondrial structures on a 20- to 30-s time scale in human myelogenous leukemia cells. Because of its robust 3D measurement capabilities, live-cell optical CT represents a powerful new tool in the biomedical research field.

  18. NMRFx Processor: a cross-platform NMR data processing program

    International Nuclear Information System (INIS)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A.

    2016-01-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.
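
    The iterative soft thresholding mentioned for non-uniformly sampled (NUS) datasets can be outlined in a few lines, fittingly in Python, the language NMRFx processing scripts are written in. The sketch below is a generic IST loop, not NMRFx's code: transform to the frequency domain, shrink small coefficients, return to the time domain, and re-impose the measured points; the sampling schedule and signal are synthetic.

    import numpy as np

    def ist_reconstruct(fid, sampled, n_iter=100, shrink=0.9):
        # fid: zero-filled time-domain data; sampled: mask of measured points.
        x = fid.astype(complex)
        for _ in range(n_iter):
            spec = np.fft.fft(x)
            thresh = shrink * np.abs(spec).max()
            mag = np.maximum(np.abs(spec), 1e-30)
            spec *= np.maximum(1.0 - thresh / mag, 0.0)   # soft thresholding
            x = np.fft.ifft(spec)
            x[sampled] = fid[sampled]    # re-impose the measured data points
            shrink *= 0.98               # relax the threshold between iterations
        return np.fft.fft(x)

    n = 256
    t = np.arange(n)
    true = np.exp(2j * np.pi * 0.1 * t) * np.exp(-t / 100.0)
    mask = np.random.default_rng(3).random(n) < 0.3       # 30 percent NUS schedule
    spec = ist_reconstruct(np.where(mask, true, 0.0), mask)
    print(int(np.argmax(np.abs(spec))))                   # peak near bin 26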

  19. NMRFx Processor: a cross-platform NMR data processing program.

    Science.gov (United States)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.

  20. Ingredients of Adaptability: A Survey of Reconfigurable Processors

    Directory of Open Access Journals (Sweden)

    Anupam Chattopadhyay

    2013-01-01

    Full Text Available For a design to survive unforeseen physical effects like aging, temperature variation, and/or the emergence of new application standards, adaptability needs to be supported. Adaptability, in its complete strength, is present in reconfigurable processors, which makes them an important IP in modern System-on-Chips (SoCs). Reconfigurable processors have risen to prominence as a dominant computing platform across embedded, general-purpose, and high-performance application domains during the last decade. Significant advances have been made in many areas, such as identifying the advantages of reconfigurable platforms, their modeling, their implementation flow and, finally, early commercial acceptance. This paper reviews these advances from various perspectives, with particular emphasis on fundamental challenges and their solutions. Building on this analysis of the past, a future research roadmap is proposed.

  1. LASIP-III, a generalized processor for standard interface files

    International Nuclear Information System (INIS)

    Bosler, G.E.; O'Dell, R.D.; Resnik, W.M.

    1976-03-01

    The LASIP-III code was developed for processing Version III standard interface data files which have been specified by the Committee on Computer Code Coordination. This processor performs two distinct tasks, namely, transforming free-field-format BCD data into well-defined binary files and providing for printing and punching the data in the binary files. While LASIP-III is exported as a complete free-standing code package, techniques are described for easily separating the processor into two modules, viz., one for creating the binary files and one for printing them. The two modules can be separated into free-standing codes or they can be incorporated into other codes. Also, the LASIP-III code can easily be expanded to process additional files, and procedures are described for such an expansion. 2 figures, 8 tables

  2. SPP: A data base processor data communications protocol

    Science.gov (United States)

    Fishwick, P. A.

    1983-01-01

    The design and implementation of a data communications protocol for the Intel Data Base Processor (DBP) is defined. The protocol is termed SPP (Service Port Protocol) since it enables data transfer between the host computer and the DBP service port. The protocol implementation is extensible in that it is explicitly layered and the protocol functionality is hierarchically organized. Extensive trace and performance capabilities have been supplied with the protocol software to permit optional efficient monitoring of the data transfer between the host and the Intel data base processor. Machine independence was considered to be an important attribute during the design and implementation of SPP. The protocol source is fully commented and is included in Appendix A of this report.

  3. A Time-Composable Operating System for the Patmos Processor

    DEFF Research Database (Denmark)

    Ziccardi, Marco; Schoeberl, Martin; Vardanega, Tullio

    2015-01-01

    In the last couple of decades we have witnessed a steady growth in the complexity and widespread use of real-time systems. In order to master the rising complexity in the timing behaviour of those systems, rightful attention has been given to the development of time-predictable computer architectures. A time-composable operating system, on top of a time-composable processor, facilitates incremental development, which is highly desirable for industry. This paper makes a twofold contribution. First, we present enhancements to the Patmos processor to allow achieving time composability at the operating system level. Second, we extend an existing time-composable operating system, TiCOS, to make best use of advanced Patmos hardware features in the pursuit of time composability.

  4. NMRFx Processor: a cross-platform NMR data processing program

    Energy Technology Data Exchange (ETDEWEB)

    Norris, Michael; Fetler, Bayard [One Moon Scientific, Inc. (United States); Marchant, Jan [University of Maryland Baltimore County, Howard Hughes Medical Institute (United States); Johnson, Bruce A., E-mail: bruce.johnson@asrc.cuny.edu [One Moon Scientific, Inc. (United States)

    2016-08-15

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.

  5. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs

  6. Implementation of comprehensive address generator for digital signal processor

    Science.gov (United States)

    Kini, Ramesh M.; David, Sumam S.

    2013-03-01

    The performance of signal-processing algorithms implemented in hardware depends on the efficiency of the datapath, memory speed and address computation. The pattern of data access in signal-processing applications is complex, and it is desirable to execute the innermost loop of a kernel in a single clock cycle. This necessitates the generation of typically three addresses per clock: two addresses for data samples/coefficients and one for the storage of processed data. Most reconfigurable processors designed for multimedia focus on mapping multimedia applications written in a high-level language directly onto the reconfigurable fabric, implying the use of the same datapath resources for kernel processing and address generation. This results in inconsistent and non-optimal use of finite datapath resources. The presence of a set of dedicated, efficient Address Generator Units (AGUs) allows better utilisation of the datapath elements by reserving them for kernel operations, and will certainly enhance performance. This article focuses on the design and application-specific integrated circuit implementation of address generators for the complex addressing modes required by multimedia signal-processing kernels. A novel AGU algorithm and hardware are developed for accessing data and coefficients in bit-reversed order for the fast Fourier transform kernel spanning log2(N) stages, along with AGUs for zig-zag-ordered data access for entropy coding after the Discrete Cosine Transform (DCT), convolution kernels with stored/streaming data, data access for motion estimation using the block-matching technique, and other conventional addressing modes. When mapped to hardware, the designs scale linearly in gate complexity with increasing size.
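
    For illustration, the bit-reversed access pattern that such an AGU produces for an N-point FFT can be modelled in a few lines of Python. This is a software sketch of the address sequence only; the article's contribution is the hardware that emits one such address per clock:

        def bit_reversed_order(n):
            """Yield FFT data addresses in bit-reversed order for n = 2**k.

            A hardware AGU produces the same sequence by reversing the k address
            bits of a simple up-counter, one address per clock cycle.
            """
            k = n.bit_length() - 1
            for i in range(n):
                rev, x = 0, i
                for _ in range(k):
                    rev = (rev << 1) | (x & 1)
                    x >>= 1
                yield rev

        # Example: an 8-point FFT reads its inputs in the order 0 4 2 6 1 5 3 7
        print(list(bit_reversed_order(8)))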

  7. A high-accuracy optical linear algebra processor for finite element applications

    Science.gov (United States)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
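
    Multiplication by digital convolution rests on a standard fact: the digit sequence of a product is the convolution of the factors' digit sequences, followed by carry propagation. The following Python sketch is an illustration of that textbook identity, not the report's optical architecture:

        import numpy as np

        def multiply_by_convolution(a_digits, b_digits, base=10):
            """Multiply two numbers given as LSB-first digit lists by convolving
            the digit sequences and then propagating carries."""
            raw = np.convolve(a_digits, b_digits)   # digit products, no carries yet
            digits, carry = [], 0
            for v in raw:
                carry, d = divmod(int(v) + carry, base)
                digits.append(d)
            while carry:
                carry, d = divmod(carry, base)
                digits.append(d)
            return digits  # LSB-first digits of the product

    For example, 12 × 34 with LSB-first digits [2, 1] and [4, 3] convolves to [8, 10, 3], which carry-propagates to [8, 0, 4], i.e. 408. In the optical processor, the convolution step is what the analog hardware performs on the encoded digits.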

  8. Design of a dataway processor for a parallel image signal processing system

    Science.gov (United States)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full-duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON'. The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.

  9. Discrete Fourier transformation processor based on complex radix (−1 + j) number system

    Directory of Open Access Journals (Sweden)

    Anidaphi Shadap

    2017-02-01

    Complex radix (−1 + j) allows the arithmetic operations of complex numbers to be done without resorting to divide-and-conquer rules, which offers a significant speed improvement in complex-number computation circuitry. The design and hardware implementation of a complex radix (−1 + j) converter are introduced in this paper. Extensive simulation results have been incorporated, and an application of this converter towards the implementation of a discrete Fourier transformation (DFT) processor has been presented. The functionality of the DFT processor has been verified in Xilinx ISE design suite version 14.7, and performance parameters such as propagation delay and dynamic switching power consumption have been calculated on the Virtuoso platform in Cadence. The proposed DFT processor has been implemented through conversion, multiplication and addition. In terms of delay and power consumption, the performance offers a significant improvement over traditional DFT processor implementations.
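
    The underlying conversion can be sketched compactly: a Gaussian integer a + bj is divisible by (−1 + j) exactly when a + b is even, so each digit is (a + b) mod 2, and exact division uses (−1 + j)(−1 − j) = 2. This Python model of the standard radix-(−1 + j) algorithm is illustrative only, not the paper's hardware design:

        def to_radix_minus1_plus_j(a, b):
            """Digits (LSB first, each 0 or 1) of the Gaussian integer a + bj
            in the complex radix (-1 + j)."""
            digits = []
            while a or b:
                d = (a + b) % 2      # remainder: a+bj divisible by (-1+j) iff a+b even
                digits.append(d)
                a -= d
                # divide exactly by (-1+j): (a+bj)(-1-j)/2 = ((b-a) - (a+b)j)/2
                a, b = (b - a) // 2, -(a + b) // 2
            return digits or [0]

        # Example: 2 + 0j encodes as digits [0, 0, 1, 1], i.e. "1100" in radix (-1+j),
        # since (-1+j)**2 + (-1+j)**3 = -2j + (2+2j) = 2
        print(to_radix_minus1_plus_j(2, 0))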

  10. A Fully Automatic Instantaneous Fire Hotspot Detection Processor Based on AVHRR Imagery—A TIMELINE Thematic Processor

    Directory of Open Access Journals (Sweden)

    Simon Plank

    2017-01-01

    The German Aerospace Center's (DLR) TIMELINE project aims to develop an operational processing and data management environment to process 30 years of National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) raw data into L1b, L2 and L3 products. This article presents the current status of the fully automated L2 active-fire hotspot detection processor, which is based on single-temporal datasets in orbit geometry. Three different probability levels of fire detection are provided. The results of the hotspot processor were tested with simulated fire data. Moreover, the processing results of real AVHRR imagery were validated against five different datasets: MODIS hotspots, visually confirmed MODIS hotspots, fire-news data from the European Forest Fire Information System (EFFIS), burnt-area mapping from the Copernicus Emergency Management Service (EMS), and data from the Piedmont fire database.

  11. Effect of processor temperature on film dosimetry.

    Science.gov (United States)

    Srivastava, Shiv P; Das, Indra J

    2012-01-01

    Optical density (OD) of a radiographic film plays an important role in radiation dosimetry and depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air-film interface, postexposure processing time, and processor temperature. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed under reference conditions (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing the films. The temperature of the processor was adjusted manually in increasing steps. At each temperature, a set of films was processed to evaluate the OD at a given dose. For both films, OD is a linear function of processor temperature in the range 29.4-40.6°C (85-105°F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the change in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.
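
    Schematically, the reported linear behaviour amounts to a correction of the form below, where T_0 is the calibration temperature and k a film-specific slope; the symbols are placeholders rather than values from the study:

    $$\mathrm{OD}(T) \;\approx\; \mathrm{OD}(T_0) + k\,(T - T_0), \qquad 29.4\,^{\circ}\mathrm{C} \le T \le 40.6\,^{\circ}\mathrm{C}$$

    so a measured OD can be mapped back to the calibration temperature before converting OD to dose.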

  12. Effect of processor temperature on film dosimetry

    International Nuclear Information System (INIS)

    Srivastava, Shiv P.; Das, Indra J.

    2012-01-01

    Optical density (OD) of a radiographic film plays an important role in radiation dosimetry and depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air-film interface, postexposure processing time, and processor temperature. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed under reference conditions (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing the films. The temperature of the processor was adjusted manually in increasing steps. At each temperature, a set of films was processed to evaluate the OD at a given dose. For both films, OD is a linear function of processor temperature in the range 29.4–40.6°C (85–105°F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the change in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.

  13. The ATLAS fast tracker processor design

    CERN Document Server

    Volpi, Guido; Albicocco, Pietro; Alison, John; Ancu, Lucian Stefan; Anderson, James; Andari, Nansi; Andreani, Alessandro; Andreazza, Attilio; Annovi, Alberto; Antonelli, Mario; Asbah, Needa; Atkinson, Markus; Baines, J; Barberio, Elisabetta; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Blair, R E; Bogdan, Mircea; Boveia, Antonio; Britzger, Daniel; Bryant, Partick; Burghgrave, Blake; Calderini, Giovanni; Camplani, Alessandra; Cavaliere, Viviana; Cavasinni, Vincenzo; Chakraborty, Dhiman; Chang, Philip; Cheng, Yangyang; Citraro, Saverio; Citterio, Mauro; Crescioli, Francesco; Dawe, Noel; Dell'Orso, Mauro; Donati, Simone; Dondero, Paolo; Drake, G; Gadomski, Szymon; Gatta, Mauro; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Howarth, James William; Iizawa, Tomoya; Ilic, Nikolina; Jiang, Zihao; Kaji, Toshiaki; Kasten, Michael; Kawaguchi, Yoshimasa; Kim, Young Kee; Kimura, Naoki; Klimkovich, Tatsiana; Kolb, Mathis; Kordas, K; Krizka, Karol; Kubota, T; Lanza, Agostino; Li, Ho Ling; Liberali, Valentino; Lisovyi, Mykhailo; Liu, Lulu; Love, Jeremy; Luciano, Pierluigi; Luongo, Carmela; Magalotti, Daniel; Maznas, Ioannis; Meroni, Chiara; Mitani, Takashi; Nasimi, Hikmat; Negri, Andrea; Neroutsos, Panos; Neubauer, Mark; Nikolaidis, Spiridon; Okumura, Y; Pandini, Carlo; Petridou, Chariclia; Piendibene, Marco; Proudfoot, James; Rados, Petar Kevin; Roda, Chiara; Rossi, Enrico; Sakurai, Yuki; Sampsonidis, Dimitrios; Saxon, James; Schmitt, Stefan; Schoening, Andre; Shochet, Mel; Shoijaii, Jafar; Soltveit, Hans Kristian; Sotiropoulou, Calliope-Louisa; Stabile, Alberto; Swiatlowski, Maximilian J; Tang, Fukun; Taylor, Pierre Thor Elliot; Testa, Marianna; Tompkins, Lauren; Vercesi, V; Wang, Rui; Watari, Ryutaro; Zhang, Jianhong; Zeng, Jian Cong; Zou, Rui; Bertolucci, Federico

    2015-01-01

    The extended use of tracking information at the trigger level in the LHC is crucial for the trigger and data acquisition (TDAQ) system to fulfill its task. Precise and fast tracking is important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, complete reconstruction at full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged-particle tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker info...

  14. Comparison of bone volume measurements using conventional single and dual energy computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yung Kyoon; Park, Sang Hoon [Dept. of Radiology, Samsung Medical Center, Seoul (Korea, Republic of); Kim, Yon Min [Dept. of Radiotechnology, Wonkwang Health Science University, Iksan (Korea, Republic of)

    2017-06-15

    The study examines changes in measured calcium volume in bone by comparing two sets of figures: one measured by dual-energy computed tomography (DECT) with varying monochromatic energy selection (keV), material decomposition (MD) and material-suppressed iodine (MSI) analysis, and the other measured by conventional single-source computed tomography (CSCT). For this study, based on CSCT images of a human-mimicking phantom, DECT reconstructions at 70, 100 and 140 keV, as well as MSI, MD material calcium weighting (MCW) and MD material iodine weighting (MIW), were applied. The calculated calcium volume was then converted to an Agatston score for comparison. The measured volume of the phantom was inversely related to keV: the volume decreased as keV increased (p<0.05). The DECT volumes most similar to CSCT were reconstructed at 70 keV; the differences were 35.8±12.2 for the rib, 16.1±24.1 for the femur, 13.7±18.8 for the pelvis and 179.0±61.8 for the spine. However, the MSI volume was reduced for each organ: 5.55% for the rib, 76.34% for the femur, 55.16% for the pelvis and 87.58% for the spine. The MSI volume decreased by 55.9% for the rib, 80.7% for the femur, 69.6% for the pelvis and 54.2% for the spine, while MD MIW reduced it by 83.51% for the rib, 87.68% for the femur, 86.64% for the pelvis and 82.62% for the spine. Overall, the study found that outcomes were affected by the method the examiners employed. When using DECT, the calcium volume of bone dropped as keV increased, and the DECT images most similar to CSCT were reconstructed at 70 keV. The experimental results imply that users of MSI and MD should be cautious of errors, as there are large differences in scores between those two methods.

  15. Safety of ventilation/perfusion single photon emission computed tomography for pulmonary embolism diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Le Roux, Pierre-Yves; Palard, Xavier; Robin, Philippe; Abgral, Ronan; Querellou, Solene; Salaun, Pierre-Yves [Universite Europeenne de Bretagne, Brest (France); Universite de Brest, Brest (France); CHRU de la Cavale Blanche, Service de medecine nucleaire, Brest (France); Delluc, Aurelien; Couturaud, Francis [Universite Europeenne de Bretagne, Brest (France); Universite de Brest, Brest (France); CHRU de la Cavale Blanche, Departement de medecine interne et de pneumologie, Brest (France); Le Gal, Gregoire [Universite Europeenne de Bretagne, Brest (France); University of Ottawa, Ottawa Hospital Research Institute, Ottawa (Canada); CHRU de la Cavale Blanche, Departement de medecine interne et de pneumologie, Brest (France); Universite de Brest, Brest (France)

    2014-10-15

    The aim of this management outcome study was to assess the safety of ventilation/perfusion single photon emission computed tomography (V/Q SPECT) for the diagnosis of pulmonary embolism (PE), using for interpretation the criteria proposed in the European Association of Nuclear Medicine (EANM) guidelines for V/Q scintigraphy. A total of 393 patients with clinically suspected PE referred to the Nuclear Medicine Department of Brest University Hospital from April 2011 to March 2013, with either a high clinical probability or a low or intermediate clinical probability but positive D-dimer, were retrospectively analysed. V/Q SPECT scans were interpreted by the attending nuclear medicine physician using a diagnostic cut-off of one segmental or two subsegmental mismatches. The final diagnostic conclusion was established by the physician responsible for patient care, based on clinical symptoms, laboratory tests, V/Q SPECT and other imaging procedures performed. Patients in whom PE was deemed absent were not treated with anticoagulants and were followed up for 3 months. Of the 393 patients, the prevalence of PE was 28%. V/Q SPECT was positive for PE in 110 patients (28%) and negative in 283 patients (72%). Of the 110 patients with a positive V/Q SPECT, 78 (71%) had at least one additional imaging test (computed tomography pulmonary angiography or ultrasound), and the diagnosis of PE was eventually excluded in one patient. Of the 283 patients with a negative V/Q SPECT, 74 (26%) had another test. The diagnosis of PE was finally retained in one patient and excluded in 282 patients. The 3-month thromboembolic risk in the patients not treated with anticoagulants was 1/262: 0.38% (95% confidence interval 0.07-2.13). A diagnostic management strategy including V/Q SPECT interpreted with a diagnostic cut-off of "one segmental or two subsegmental mismatches" appears safe for excluding PE. (orig.)

  16. Computational local stiffness analysis of biological cell: High aspect ratio single wall carbon nanotube tip

    Energy Technology Data Exchange (ETDEWEB)

    TermehYousefi, Amin, E-mail: at.tyousefi@gmail.com [Department of Human Intelligence Systems, Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology (Kyutech) (Japan); Bagheri, Samira; Shahnazar, Sheida [Nanotechnology & Catalysis Research Centre (NANOCAT), IPS Building, University Malaya, 50603 Kuala Lumpur (Malaysia); Rahman, Md. Habibur [Department of Computer Science and Engineering, University of Asia Pacific, Green Road, Dhaka-1215 (Bangladesh); Kadri, Nahrizul Adib [Department of Biomedical Engineering, Faculty of Engineering, University Malaya, 50603 Kuala Lumpur (Malaysia)

    2016-02-01

    Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to their robust mechanical properties, nanoscale diameter and their ability to be functionalized with chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as an AFM tip in computational analysis of biological cells. The software used was ABAQUS 6.13 CAE/CEL provided by Dassault Systems, a powerful finite element (FE) tool for performing numerical analysis and visualizing the interactions between the proposed tip and the cell membrane. Finite element analysis was employed for each section, and the displacement of the nodes located in the contact area was monitored using an output database (ODB). A Mooney–Rivlin hyperelastic model of the cell allows the simulation to provide a new method for estimating the stiffness and spring constant of the cell. The stress-strain curve indicates the yield stress point, defined in terms of vertical stress and plane stress. The spring constant and local stiffness of the cell were measured, as well as the applied force of the CNT-AFM tip on the contact area of the cell. This reliable integration of the CNT-AFM tip process provides a new class of high-performance nanoprobes for single biological cell analysis.

  17. Evaporation of freely suspended single droplets: experimental, theoretical and computational simulations

    International Nuclear Information System (INIS)

    Hołyst, R; Litniewski, M; Jakubczyk, D; Kolwas, K; Kolwas, M; Kowalski, K; Migacz, S; Palesa, S; Zientara, M

    2013-01-01

    Evaporation is ubiquitous in nature. This process influences the climate, the formation of clouds, transpiration in plants, the survival of arctic organisms, the efficiency of car engines, the structure of dried materials and many other phenomena. Recent experiments discovered two novel mechanisms accompanying evaporation: temperature discontinuity at the liquid–vapour interface during evaporation and equilibration of pressures in the whole system during evaporation. None of these effects has been predicted previously by existing theories despite the fact that after 130 years of investigation the theory of evaporation was believed to be mature. These two effects call for reanalysis of existing experimental data and such is the goal of this review. In this article we analyse the experimental and the computational simulation data on the droplet evaporation of several different systems: water into its own vapour, water into the air, diethylene glycol into nitrogen and argon into its own vapour. We show that the temperature discontinuity at the liquid–vapour interface discovered by Fang and Ward (1999 Phys. Rev. E 59 417–28) is a rule rather than an exception. We show in computer simulations for a single-component system (argon) that this discontinuity is due to the constraint of momentum/pressure equilibrium during evaporation. For high vapour pressure the temperature is continuous across the liquid–vapour interface, while for small vapour pressures the temperature is discontinuous. The temperature jump at the interface is inversely proportional to the vapour density close to the interface. We have also found that all analysed data are described by the following equation: da/dt = P1/(a + P2), where a is the radius of the evaporating droplet, t is time and P1 and P2 are two parameters. P1 = −λΔT/(q_eff ρ_L), where λ is the thermal conductivity coefficient in the vapour at the interface, ΔT is the temperature difference between the liquid droplet
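
    Separating variables in the quoted law makes its relation to the classical d²-law of droplet evaporation explicit (a standard integration step, added here for clarity rather than taken from the review):

    $$\left(a + P_2\right)\frac{da}{dt} = P_1 \quad\Longrightarrow\quad \frac{a^2}{2} + P_2\,a = P_1\,t + C$$

    so for P_2 → 0 the droplet surface area a² decreases linearly in time (the d²-law), while a nonzero P_2 captures the departure from that law for small droplets.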

  18. [Analysis of single-photon emission computed tomography in patients with hypertensive encephalopathy complicated with previous hypertensive crisis].

    Science.gov (United States)

    Kustkova, H S

    2012-01-01

    In cerebrovascular disease, perfusion single-photon emission computed tomography with lipophilic amines is used for the diagnosis of functional disorders of cerebral blood flow. Quantitative calculation helps to clarify the nature of the vascular disease and to assess the adequacy and effectiveness of treatment. Modern SPECT software makes it possible to compute not only relative blood flow but also the absolute values of cerebral blood flow.

  19. The GF-3 SAR Data Processor.

    Science.gov (United States)

    Han, Bing; Ding, Chibiao; Zhong, Lihua; Liu, Jiayin; Qiu, Xiaolan; Hu, Yuxin; Lei, Bin

    2018-03-10

    The Gaofen-3 (GF-3) data processor was developed as a workstation-based GF-3 synthetic aperture radar (SAR) data processing system. The processor consists of two vital subsystems of the GF-3 ground segment, which are referred to as data ingesting subsystem (DIS) and product generation subsystem (PGS). The primary purpose of DIS is to record and catalogue GF-3 raw data with a transferring format, and PGS is to produce slant range or geocoded imagery from the signal data. This paper presents a brief introduction of the GF-3 data processor, including descriptions of the system architecture, the processing algorithms and its output format.

  20. Making CSB+-Tree Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of CSB+-tree on Itanium 2. Finally, we propose...

  1. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate muon tracks in the drift tubes in real time, significantly improving the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an alternative associative-memory implementation, an ARM (Zynq) processor-based track fitter, and a compact ATCA carrier-board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.
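
    As a flavour of how a Legendre (Hough-like) transform finds straight segments among drift-tube hits, here is a toy software model: each hit, a wire position plus drift radius, votes for the two candidate tangent lines, and the accumulator maximum gives the segment. All names, binnings and units are illustrative assumptions of this sketch, not the actual firmware:

        import numpy as np

        def legendre_segment_finder(hits, n_theta=64, n_r=128, r_max=100.0):
            """Toy Legendre-transform segment finder for drift-tube hits.

            hits : iterable of (x, y, d) -- wire position and drift radius.
            Each hit votes for the lines r = x*cos(t) + y*sin(t) +/- d that are
            tangent to its drift circle; the accumulator maximum is the segment.
            """
            acc = np.zeros((n_theta, n_r), dtype=int)
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            for x, y, d in hits:
                r0 = x * np.cos(thetas) + y * np.sin(thetas)   # line through the wire
                for r in (r0 - d, r0 + d):                     # the two tangent lines
                    idx = np.floor((r + r_max) / (2 * r_max) * n_r).astype(int)
                    ok = (idx >= 0) & (idx < n_r)
                    acc[np.nonzero(ok)[0], idx[ok]] += 1
            ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
            return thetas[ti], -r_max + (ri + 0.5) * 2 * r_max / n_r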

  2. Visual mismatch and predictive coding: A computational single-trial ERP study.

    Science.gov (United States)

    Stefanics, Gabor; Heinzle, Jakob; Horváth, András Attila; Stephan, Klaas Enno

    2018-03-26

    Predictive coding (PC) posits that the brain employs a generative model to infer the environmental causes of its sensory data and uses precision-weighted prediction errors (pwPE) to continuously update this model. While supported by much circumstantial evidence, experimental tests grounded in formal trial-by-trial predictions are rare. One partial exception is event-related potential (ERP) studies of the auditory mismatch negativity (MMN), where computational models have found signatures of pwPEs and related model-updating processes. Here, we tested this hypothesis in the visual domain, examining possible links between visual mismatch responses and pwPEs. We used a novel visual 'roving standard' paradigm to elicit mismatch responses in humans (of both sexes) by unexpected changes in either the color or the emotional expression of faces. Using a hierarchical Bayesian model, we simulated pwPE trajectories of a Bayes-optimal observer and used these to conduct a comprehensive trial-by-trial analysis across the time × sensor space. We found significant modulation of brain activity by both color and emotion pwPEs. The scalp distribution and timing of these single-trial pwPE responses were in agreement with visual mismatch responses obtained by traditional averaging and subtraction (deviant-minus-standard) approaches. Finally, we compared the Bayesian model to a more classical change detection (CD) model of the MMN. Model comparison revealed that trial-wise pwPEs explained the observed mismatch responses better than categorical change detection. Our results suggest that visual mismatch responses reflect trial-wise pwPEs, as postulated by PC. These findings go beyond classical ERP analyses of visual mismatch and illustrate the utility of computational analyses for studying automatic perceptual processes. SIGNIFICANCE STATEMENT: Human perception is thought to rely on a predictive model of the environment which is updated via precision-weighted prediction errors (pwPE) when events violate
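
    For orientation, the precision-weighted update that pwPE models formalize has, in its simplest Gaussian form, the textbook shape below (a schematic of the general principle, not the paper's exact hierarchical model):

    $$\mu \;\leftarrow\; \mu + \frac{\pi_{y}}{\pi_{\mu} + \pi_{y}}\,(y - \mu)$$

    where y is the input, μ the prior belief, and π_y, π_μ their precisions: surprising inputs move beliefs most when the input is reliable and the prior is uncertain.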

  3. Wing-Body Aeroelasticity Using Finite-Difference Fluid/Finite-Element Structural Equations on Parallel Computers

    Science.gov (United States)

    Byun, Chansup; Guruswamy, Guru P.; Kutler, Paul (Technical Monitor)

    1994-01-01

    In recent years significant advances have been made for parallel computers in both hardware and software. Now parallel computers have become viable tools in computational mechanics. Many application codes developed on conventional computers have been modified to benefit from parallel computers. Significant speedups in some areas have been achieved by parallel computations. For single-discipline use of both fluid dynamics and structural dynamics, computations have been made on wing-body configurations using parallel computers. However, only a limited amount of work has been completed in combining these two disciplines for multidisciplinary applications. The prime reason is the increased level of complication associated with a multidisciplinary approach. In this work, procedures to compute aeroelasticity on parallel computers using direct coupling of fluid and structural equations will be investigated for wing-body configurations. The parallel computer selected for computations is an Intel iPSC/860 computer which is a distributed-memory, multiple-instruction, multiple data (MIMD) computer with 128 processors. In this study, the computational efficiency issues of parallel integration of both fluid and structural equations will be investigated in detail. The fluid and structural domains will be modeled using finite-difference and finite-element approaches, respectively. Results from the parallel computer will be compared with those from the conventional computers using a single processor. This study will provide an efficient computational tool for the aeroelastic analysis of wing-body structures on MIMD type parallel computers.

  4. Development of Innovative Design Processor

    International Nuclear Information System (INIS)

    Park, Y.S.; Park, C.O.

    2004-01-01

    The nuclear design analysis requires time-consuming and error-prone model-input preparation, code runs, output analysis and quality assurance processes. To reduce human effort and improve design quality and productivity, the Innovative Design Processor (IDP) is being developed. The two basic principles of IDP are document-oriented design and web-based design. In document-oriented design, the designer writes a design document (called an active document) and feeds it to a special program, which automatically produces the final document with complete analyses, tables and plots. Active documents can be written with ordinary HTML editors or created automatically on the web, which is the other framework of IDP. Using a proper mix of server-side and client-side programming under the LAMP (Linux/Apache/MySQL/PHP) environment, the design process on the web is modeled in a design-wizard style so that even a novice designer can produce the design document easily. This automation using the IDP is now being implemented for all reload designs of Korea Standard Nuclear Power Plant (KSNP)-type PWRs. The introduction of this process will allow a large reduction in all KSNP reload design efforts and provide a platform for design and R and D tasks of KNFC. (authors)

  5. Single photon emission computed tomography in the diagnosis of Alzheimer's disease

    International Nuclear Information System (INIS)

    Hanyu, Haruo; Asano, Tetsuichi; Abe, Shin'e; Arai, Hisayuki; Iwamoto, Toshihiko; Takasaki, Masaru; Shindo, Hiroaki; Abe, Kimihiko

    1997-01-01

    Studies with single photon emission computed tomography (SPECT) have shown temporoparietal (TP) hypoperfusion in patients with Alzheimer's disease (AD). We evaluated the utility of this finding in the diagnosis of AD. SPECT images with 123I-iodoamphetamine were analyzed qualitatively by a rater without knowledge of the subject's clinical status. Sixty-seven of 302 consecutive patients were judged as having TP hypoperfusion by SPECT imaging. This perfusion pattern was observed in 44 of 51 patients with AD, in 5 with mixed dementia, 8 with cerebrovascular disease (including 5 with dementia), 4 with Parkinson's disease (including 2 with dementia), 1 with normal pressure hydrocephalus, 1 with slowly progressive aphasia, 1 with progressive autonomic failure, 2 with age-associated memory impairment, and 1 with unclassified dementia. The sensitivity for AD was 86.3% (44 of 51 AD), and the specificity was 91.2% (229 of 251 non-AD). Next, we looked for differences in perfusion images between patients with and without AD. Some patients without AD had additional hypoperfusion beyond TP areas: deep gray matter hypoperfusion and diffuse frontal hypoperfusion, which could be used to differentiate them from the patients with AD. Others could not be distinguished from patients with AD by their perfusion pattern. Although patients with other cerebral disorders occasionally have TP hypoperfusion, this finding makes the diagnosis of AD very likely. (author)

  6. Single photon emission computed tomography in the diagnosis of Alzheimer's disease

    Energy Technology Data Exchange (ETDEWEB)

    Hanyu, Haruo; Asano, Tetsuichi; Abe, Shin'e; Arai, Hisayuki; Iwamoto, Toshihiko; Takasaki, Masaru; Shindo, Hiroaki; Abe, Kimihiko [Tokyo Medical Coll. (Japan)]

    1997-06-01

    Studies with single photon emission computed tomography (SPECT) have shown temporoparietal (TP) hypoperfusion in patients with Alzheimer's disease (AD). We evaluated the utility of this finding in the diagnosis of AD. SPECT images with 123I-iodoamphetamine were analyzed qualitatively by a rater without knowledge of the subject's clinical status. Sixty-seven of 302 consecutive patients were judged as having TP hypoperfusion by SPECT imaging. This perfusion pattern was observed in 44 of 51 patients with AD, in 5 with mixed dementia, 8 with cerebrovascular disease (including 5 with dementia), 4 with Parkinson's disease (including 2 with dementia), 1 with normal pressure hydrocephalus, 1 with slowly progressive aphasia, 1 with progressive autonomic failure, 2 with age-associated memory impairment, and 1 with unclassified dementia. The sensitivity for AD was 86.3% (44 of 51 AD), and the specificity was 91.2% (229 of 251 non-AD). Next, we looked for differences in perfusion images between patients with and without AD. Some patients without AD had additional hypoperfusion beyond TP areas: deep gray matter hypoperfusion and diffuse frontal hypoperfusion, which could be used to differentiate them from the patients with AD. Others could not be distinguished from patients with AD by their perfusion pattern. Although patients with other cerebral disorders occasionally have TP hypoperfusion, this finding makes the diagnosis of AD very likely. (author)

  7. Gated single photon emission computer tomography for the detection of silent myocardial ischemia

    International Nuclear Information System (INIS)

    Pena Q, Yamile; Coca P, Marco Antonio; Batista C, Juan Felipe; Fernandez-Britto, Jose; Quesada P, Rodobaldo; Pena C, Andria

    2009-01-01

    Background: Asymptomatic patients with severe coronary atherosclerosis may have a normal resting electrocardiogram and stress test. Aim: To assess the yield of gated single photon emission computed tomography (SPECT) for the screening of silent myocardial ischemia in type 2 diabetic patients. Material and methods: Electrocardiogram, stress test and gated SPECT were performed on 102 type 2 diabetic patients aged 60 ± 8 years without cardiovascular symptoms. All subjects also underwent coronary angiography, whose results were used as the gold standard. Results: Gated SPECT showed myocardial ischemia in 26.5% of the studied patients. The sensitivity, specificity, accuracy, positive predictive value and negative predictive value were 92.3%, 96%, 95%, 88.8% and 97.3%, respectively. In four and six patients, ischemia was detected on the resting electrocardiogram and the stress test, respectively. Eighty percent of patients with doubtful resting electrocardiogram results and 70% with a doubtful stress test had silent myocardial ischemia detected by gated SPECT. There was good agreement between the results of gated SPECT and coronary angiography (k = 0.873). Conclusions: Gated SPECT was a useful tool for the screening of silent myocardial ischemia.

  8. Single-photon emission computed tomographic findings and motor neuron signs in amyotrophic lateral sclerosis

    International Nuclear Information System (INIS)

    Terao, Shin-ichi; Sobue, Gen; Higashi, Naoki; Takahashi, Masahiko; Suga, Hidemichi; Mitsuma, Terunori

    1995-01-01

    123I-amphetamine single photon emission computed tomography (SPECT) was performed on 16 patients with amyotrophic lateral sclerosis (ALS) to investigate the correlation between regional cerebral blood flow (rCBF) and upper motor neuron signs. Significantly decreased blood flow, more than 2 SDs below the mean of controls, was observed in the frontal lobe in 4 patients (25%) and in the frontoparietal lobe, including the cortical motor area, in another 4 patients. The severity of extremity muscular weakness correlated significantly with decreased blood flow in the frontal lobe (p<0.05) and in the frontoparietal lobe (p<0.001). A significant correlation was also noted between the severity of bulbar paralysis and decreased blood flow in the frontoparietal lobe. No correlation, however, was observed between rCBF and the severity of spasticity, the presence or absence of Babinski's sign, or the duration of illness. Although muscular weakness in the limbs and bulbar paralysis are not pure upper motor neuron signs, the observed reduction in blood flow in the frontal or frontoparietal lobes appears to reflect extensive progression of functional or organic lesions of cortical neurons, including the motor area. (author)

  9. Musicogenic epilepsy: review of the literature and case report with ictal single photon emission computed tomography.

    Science.gov (United States)

    Wieser, H G; Hungerbühler, H; Siegel, A M; Buck, A

    1997-02-01

    We report a case of musicogenic epilepsy with an ictal single photon emission computed tomography (SPECT) study and discuss the findings in this patient in the context of 76 cases of musicogenic epilepsy described in the literature and seven other cases followed in Zurich. We analyzed the 83 patients according to the precipitating musical factors, type of epilepsy, presumed localization of seizure onset, and demographic data. Fourteen of 83 patients (17%) had seizures triggered exclusively by music. At the time of examination, music was the only known precipitating stimulus in 65 of 83 patients (78%). Various characteristics of the musical stimulus were significant, e.g., musical category, familiarity, and instruments. Musicogenic epilepsy is a particular form of epilepsy with a strong correlation to the temporal lobe and a right-sided preponderance. A high musical standard might predispose to musicogenic epilepsy. Moreover, the majority of cases do not fall into the category of a strictly defined "reflex epilepsy," but appear to depend on the intermediary of a certain emotional reaction mediated through limbic mesial temporal lobe structures.

  10. Comparison of Ga-67 planar imaging and single photon emission computed tomography in malignant chest disease

    International Nuclear Information System (INIS)

    Tumeh, S.S.; Rosenthal, D.; Kaplan, W.D.; English, R.E.; Holman, B.L.

    1985-01-01

    To determine the value of Ga-67 single photon emission computed tomography (SPECT) in patients (pts) with malignant chest disease, the authors compared Ga-67 planar scans and SPECT with the medical records in twenty-five consecutive patients. Twenty-three examinations were performed on 17 pts with Hodgkin's disease (HD) and three pts with non-Hodgkin's lymphoma. Five examinations were performed on 5 pts with bronchogenic carcinoma (BC). The two modalities were evaluated for (1) presence or absence of disease, (2) number of foci of abnormal uptake and (3) extent of disease. In pts with lymphoma, SPECT defined the extent of disease better than planar imaging in eight examinations; it demonstrated paracardial involvement in one pt, separated hilar from mediastinal disease in 4, and demonstrated posterior mediastinal disease in 3. SPECT clarified suspicious foci on planar images in seven examinations, correctly ruled out disease in two pts with equivocal planar images, and did not change the planar image findings in six examinations. In pts with bronchogenic carcinoma, both modalities correctly ruled out mediastinal involvement in three pts. SPECT detected mediastinal lymph node involvement in one pt with equivocal planar images. Both SPECT and planar imaging missed direct tumor extension into the mediastinum in one pt. They conclude that Ga-67 SPECT is better than planar imaging for the staging of chest lymphoma and BC. Since it defines different lymph node groups, it carries good potential for staging as well as follow-up of these pts.

  11. Single photon emission computed tomography (SPECT) in seizure disorders in childhood

    International Nuclear Information System (INIS)

    Vles, J.S.H.; Demandt, E.; Ceulemans, B.; de Roo, M.; Casaer, P.J.M.

    1990-01-01

    In 38 children with partial seizures, the EEG, CT and NMR findings were compared to the results obtained with Tc99m HMPAO single photon emission computed tomography (SPECT) in order to determine whether SPECT is a useful adjunct to EEG, CT and NMR in this age group. In 3 out of 7 patients with a normal EEG, SPECT showed focal abnormalities. Nine patients whose EEGs did not show adequate lateralization had an abnormal SPECT which revealed a focus. In 14 out of 21 patients with a normal CT, SPECT showed focal changes in 13 patients and diffuse changes in the other one. In 7 out of 12 patients with a normal NMR, SPECT showed focal abnormalities. Although clinical history and a careful description of the seizures are the most valuable information in partial seizure disorders, SPECT imaging gives valuable additional information, which might target treatment. SPECT was superior to CT and NMR with respect to the depiction of some kind of abnormality. (author)

  12. Restoration filtering based on projection power spectrum for single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Kubo, Naoki

    1995-01-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical 'least squares filter' theory, which requires knowledge of the object power spectrum and the noise power spectrum. The object power spectrum is estimated from the power spectrum of a projection, whose high-frequency part is adequately approximated by a polynomial-exponential expression. Restoration with the filter based on the projection power spectrum was studied and compared with the 'Butterworth' filtering method (cut-off frequency of 0.15 cycles/pixel) and 'Wiener' filtering (with a constant signal-to-noise power spectrum ratio). Normalized mean-squared errors (NMSE) were computed for a phantom consisting of two line sources located in a 99mTc-filled cylinder. The NMSE of the 'Butterworth' filter, the 'Wiener' filter, and the filter based on the projection power spectrum were 0.77, 0.83, and 0.76, respectively. Clinically, brain SPECT images processed with this new restoration filter showed improved contrast. Thus, this filter may be useful in the diagnosis of SPECT images. (author)
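
    The textbook frequency response of such a practical least-squares (Wiener-type) filter is shown below for reference; it is written here in its standard form rather than copied from the paper, whose contribution is the per-projection estimate of the object power spectrum that plugs into this ratio:

    $$W(\nu) \;=\; \frac{P_{\mathrm{object}}(\nu)}{P_{\mathrm{object}}(\nu) + P_{\mathrm{noise}}(\nu)}$$

    This passes frequencies where the estimated object power dominates the noise and suppresses the rest; estimating P_object per projection is what distinguishes the approach from Wiener filtering with a constant signal-to-noise ratio.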

  13. Astigmatic single photon emission computed tomography imaging with a displaced center of rotation

    International Nuclear Information System (INIS)

    Wang, H.; Smith, M.F.; Stone, C.D.; Jaszczak, R.J.

    1998-01-01

    A filtered backprojection algorithm is developed for single photon emission computed tomography (SPECT) imaging with an astigmatic collimator having a displaced center of rotation. The astigmatic collimator has two perpendicular focal lines, one parallel to the axis of rotation of the gamma camera and one perpendicular to this axis. Using SPECT simulations of projection data from a hot rod phantom and point source arrays, it is found that failing to incorporate the mechanical shift in the reconstruction algorithm causes errors and artifacts in reconstructed SPECT images. The collimator and acquisition parameters in the astigmatic reconstruction formula, which include the focal lengths, the radius of rotation, and the mechanical shifts, are often partly unknown and can be determined using the projections of a point source at various projection angles. The accurate determination of these parameters by a least squares fitting technique using projection data from numerically simulated SPECT acquisitions is studied. These studies show that the accuracy of parameter determination improves as the distance between the point source and the axis of rotation of the gamma camera increases. The focal length to the focal line perpendicular to the axis of rotation is determined more accurately than the focal length to the focal line parallel to this axis.

  14. Time sequential single photon emission computed tomography studies in brain tumour using thallium-201

    International Nuclear Information System (INIS)

    Ueda, Takashi; Kaji, Yasuhiro; Wakisaka, Shinichiro; Watanabe, Katsushi; Hoshi, Hiroaki; Jinnouchi, Seishi; Futami, Shigemi

    1993-01-01

    Time sequential single photon emission computed tomography (SPECT) studies using thallium-201 were performed in 25 patients with brain tumours to evaluate the kinetics of thallium in the tumour and the biological malignancy grade preoperatively. After acquisition and reconstruction of SPECT data from 1 min post injection to 48 h (1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 15-20 min, followed by 4-6, 24 and 48 h), the thallium uptake ratio in the tumour versus the homologous contralateral area of the brain was calculated and compared with findings of X-ray CT, magnetic resonance imaging, cerebral angiography and histological investigations. Early uptake of thallium in tumours was related to tumour vascularity and the disruption of the blood-brain barrier. High and rapid uptake and slow reduction of thallium indicated a hypervascular malignant tumour; however, high and rapid uptake but rapid reduction of thallium indicated a hypervascular benign tumour, such as meningioma. Hypovascular and benign tumours tended to show low uptake and slow reduction of thallium. Long-lasting retention or uptake of thallium indicates tumour malignancy. (orig.)

  15. Critical examination of the uniformity requirements for single-photon emission computed tomography.

    Science.gov (United States)

    O'Connor, M K; Vermeersch, C

    1991-01-01

    It is generally recognized that single-photon emission computed tomography (SPECT) imposes very stringent requirements on gamma camera uniformity to prevent the occurrence of ring artifacts. The purpose of this study was to examine the relationship between nonuniformities in the planar data and the magnitude of the consequential ring artifacts in the transaxial data, and how the perception of these artifacts is influenced by factors such as reconstruction matrix size, reconstruction filter, and image noise. The study indicates that the relationship between ring artifact magnitude and image noise is essentially independent of the acquisition or reconstruction matrix sizes, but is strongly dependent upon the type of smoothing filter applied during the reconstruction process. Furthermore, the degree to which a ring artifact can be perceived above image noise is dependent on the size and location of the nonuniformity in the planar data, with small nonuniformities (1-2 pixels wide) close to the center of rotation being less perceptible than those further out (8-20 pixels). Small defects or nonuniformities close to the center of rotation are thought to cause the greatest potential corruption to tomographic data. The study indicates that such may not be the case. Hence the uniformity requirements for SPECT may be less demanding than was previously thought.

  16. Single photon emission computed tomography and albumin colloid imaging of the liver

    International Nuclear Information System (INIS)

    Croft, B.Y.; Teates, C.D.; Honeyman, J.C.

    1984-01-01

    A single photon emission computed tomography (ECT) system using the GE 400T Anger camera with 37 PM tubes and the SPETS software has been installed in our clinical laboratory. It has been used in the study of liver imaging with Tc-99m albumin colloid and other agents. The object of the study is to define what improvement in liver diagnosis might be made using ECT. Patients were injected with 3-4 mCi (ca 120 MBq) of colloid; five standard liver-spleen views and a 64-image ECT study were acquired. The ECT images were acquired either in a circle of the radius of the longer transverse axis of the patient or in an ellipse to match the patient contour. Studies were corrected for the attenuation of the Tc-99m gamma rays by tissue. A series of normal and abnormal patients have been studied and the data analyzed. The significant change in the technique of ECT imaging is the elliptical motion of the camera head which allows a better approximation of the patient contour and improves the spatial resolution of the images. (orig.)

  17. Bone single photon emission computed tomography (SPECT) in a patient with Pancoast tumor: a case report

    Directory of Open Access Journals (Sweden)

    Hamid Javadi

    CONTEXT: Non-small cell lung carcinomas (NSCLCs) of the superior sulcus are considered to be the most challenging type of malignant thoracic disease. In this disease, neoplasms originating mostly from the extreme apex of the lung expand into the chest wall and thoracic inlet structures. Multiple imaging procedures have been applied to identify tumors and to stage and predict tumor resectability before surgical operations. Clinical examinations to localize pain complaints in the shoulders and down the arms, and to screen for Horner's syndrome and abnormalities seen in paraclinical assessments, have been applied extensively in the differential diagnosis of superior sulcus tumors. Although several types of imaging have been utilized for diagnosing and staging Pancoast tumors, there have been almost no reports on the efficiency of whole-body bone scans (WBBS) for detecting the level of abnormality in cases of superior sulcus tumors. CASE REPORT: We describe a case of Pancoast tumor in which technetium-99m methylene diphosphonate (Tc-99m MDP) bone single-photon emission computed tomography (SPECT) was able to accurately detect multiple areas of abnormality in the vertebrae and ribs. In describing this case, we stress the clinical and diagnostic points, in the hope of stimulating a higher degree of suspicion and thereby facilitating appropriate diagnosis and treatment. From the results of this study, further clinical trials to evaluate the potential of SPECT as an efficient imaging tool in the work-up of cases of Pancoast tumor are recommended.

  18. Statistical evaluation of single-photon emission computed tomography image using smoothed bootstrap method

    International Nuclear Information System (INIS)

    Tsukamoto, Megumi; Hatabu, Asuka; Takahashi, Yoshitake; Matsuda, Hiroshi; Okamoto, Kousuke; Yamashita, Noriyuki; Takagi, Tatsuya

    2013-01-01

    Many of the neurodegenerative diseases associated with a decrease in regional cerebral blood flow (rCBF) are untreatable, and the appropriate therapeutic strategy is to slow the progression of the disease. Therefore, it is important that a definitive diagnosis is made as soon as possible when such diseases are suspected. Diagnostic imaging methods, such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT), play an important role in such a definitive diagnosis. Since several problems arise when evaluating these images visually, a procedure to evaluate them objectively is necessary, and studies of image analyses using statistical evaluations have been suggested. However, the assumed data distribution in a statistical procedure may occasionally be inappropriate. Therefore, to evaluate the decrease of rCBF, it is important to use a statistical procedure without assumptions about the data distribution. In this study, we propose a new procedure that uses nonparametric or smoothed bootstrap methods to calculate a standardized distribution of the Z-score without assumptions about the data distribution. To test whether the judgment of the proposed procedure is equivalent to that of an evaluation based on the Z-score with a fixed threshold, the procedure was applied to a sample data set whose size was large enough to be appropriate for the assumption of the Z-score. As a result, the evaluations of the proposed procedure were equivalent to that of an evaluation based on the Z-score. (author)

  19. [Restoration filtering based on projection power spectrum for single-photon emission computed tomography].

    Science.gov (United States)

    Kubo, N

    1995-04-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical "least squares filter" theory, which requires knowledge of the object power spectrum and the noise power spectrum. The object power spectrum is estimated from the power spectrum of a projection, with its high-frequency portion approximated by a polynomial exponential expression. Restoration with the filter based on the projection power spectrum was studied and compared with "Butterworth" filtering (cut-off frequency of 0.15 cycles/pixel) and "Wiener" filtering (with a constant signal-to-noise power spectrum ratio). The normalized mean-squared error (NMSE) was evaluated on a phantom consisting of two line sources located in a 99mTc-filled cylinder. The NMSE of the "Butterworth" filter, the "Wiener" filter, and the filtering based on the projection power spectrum was 0.77, 0.83, and 0.76, respectively. Clinically, brain SPECT images utilizing this new restoration filter showed improved contrast. Thus, this filter may be useful in the diagnostic interpretation of SPECT images.
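
    For reference, the baseline Butterworth filtering mentioned above can be sketched as follows (a hedged illustration; the filter order and the NMSE helper are assumptions, and only the 0.15 cycles/pixel cut-off comes from the text):

      # Minimal sketch of a 2-D Butterworth low-pass filter applied in the
      # frequency domain, plus the NMSE figure of merit used for comparison.
      import numpy as np

      def butterworth_lowpass(image, cutoff=0.15, order=4):
          """Apply a 2-D Butterworth low-pass filter (cutoff in cycles/pixel)."""
          ny, nx = image.shape
          fy = np.fft.fftfreq(ny)[:, None]
          fx = np.fft.fftfreq(nx)[None, :]
          f = np.hypot(fy, fx)                       # radial frequency
          h = 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))
          return np.real(np.fft.ifft2(np.fft.fft2(image) * h))

      def nmse(reference, estimate):
          """Normalized mean-squared error between a phantom and its estimate."""
          return np.sum((estimate - reference) ** 2) / np.sum(reference ** 2)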

  20. Intelligence quotient-adjusted memory impairment is associated with abnormal single photon emission computed tomography perfusion.

    Science.gov (United States)

    Rentz, Dorene M; Huh, Terri J; Sardinha, Lisa M; Moran, Erin K; Becker, John A; Daffner, Kirk R; Sperling, Reisa A; Johnson, Keith A

    2007-09-01

    Cognitive reserve among highly intelligent older individuals makes detection of early Alzheimer's disease (AD) difficult. We tested the hypothesis that mild memory impairment determined by IQ-adjusted norms is associated with single photon emission computed tomography (SPECT) perfusion abnormality at baseline and predictive of future decline. Twenty-three subjects with a Clinical Dementia Rating (CDR) score of 0 were reclassified, after scores were adjusted for IQ, into two groups: 10 as having mild memory impairment for ability (IQ-MI) and 13 as memory-normal (IQ-MN). Subjects underwent cognitive and functional assessments at baseline and annual follow-up for 3 years. Perfusion SPECT was acquired at baseline. At follow-up, the IQ-MI subjects demonstrated decline in memory, visuospatial processing, and phonemic fluency, and 6 of 10 had progressed to a CDR of 0.5, while the IQ-MN subjects did not show decline. The IQ-MI group had significantly lower perfusion than the IQ-MN group in parietal/precuneus, temporal, and opercular frontal regions. In contrast, higher perfusion was observed in IQ-MI compared with IQ-MN subjects in the left medial frontal and rostral anterior cingulate regions. IQ-adjusted memory impairment in individuals with high cognitive reserve is associated with baseline SPECT abnormality in a pattern consistent with prodromal AD and predicts subsequent cognitive and functional decline.

  1. STATSLAB: An open-source EEG toolbox for computing single-subject effects using robust statistics.

    Science.gov (United States)

    Campopiano, Allan; van Noordt, Stefon J R; Segalowitz, Sidney J

    2018-03-21

    Research on robust statistics during the past half century provides concrete evidence that classical hypothesis tests that rely on the sample mean and variance are problematic. Even seemingly minor departures from normality are now known to create major problems in terms of increased error rates and decreased power. Fortunately, numerous robust estimation techniques have been developed that circumvent the need for strict assumptions of normality and equal variances, leading to increased power and accuracy when testing hypotheses. Two robust methods that have been shown to have practical value across a wide range of applied situations are the trimmed mean and percentile bootstrap test. To facilitate the uptake of robust methods into the behavioural sciences, especially when dealing with trial-based data such as EEG, we introduce STATSLAB: An open-source EEG toolbox for computing single-subject effects using robust statistics. With the STATSLAB toolbox users can apply the percentile bootstrap test, with trimmed means, to a variety of neural signals including voltages, global field amplitude, and spectral features for both scalp channels and independent components. The toolbox offers a range of analytical strategies and is packaged with a fully functional graphical user interface that includes documentation. Copyright © 2018 Elsevier B.V. All rights reserved.
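
    As a hedged illustration of the two robust ingredients named above (this is not STATSLAB code; the sample sizes, trimming level, and data are made up), a percentile bootstrap test on 20% trimmed means can be sketched as:

      # Percentile-bootstrap confidence interval for a difference of
      # trimmed means, as applied to trial-based data such as EEG.
      import numpy as np
      from scipy.stats import trim_mean

      rng = np.random.default_rng(1)

      def percentile_bootstrap_diff(a, b, trim=0.2, n_boot=2000, alpha=0.05):
          """Percentile-bootstrap CI for the difference of trimmed means."""
          diffs = np.empty(n_boot)
          for i in range(n_boot):
              ra = rng.choice(a, size=a.size, replace=True)
              rb = rng.choice(b, size=b.size, replace=True)
              diffs[i] = trim_mean(ra, trim) - trim_mean(rb, trim)
          lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
          return lo, hi   # effect is "significant" if 0 lies outside [lo, hi]

      # Usage with synthetic single-trial ERP amplitudes from two conditions:
      cond_a = rng.normal(2.0, 1.5, size=60)
      cond_b = rng.normal(1.2, 1.5, size=60)
      print(percentile_bootstrap_diff(cond_a, cond_b))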

  2. Single Qubit Spin Readout and Initialization in a Quantum Dot Quantum Computer: Design and Simulation

    Science.gov (United States)

    Tahan, Charles; Friesen, Mark; Joynt, Robert; Eriksson, M. A.

    2003-03-01

    Although electron spin qubits in semiconductors are attractive from the viewpoint of low environmental coupling and long coherence times, spin readout remains a challenge for quantum dot quantum computing. Unfortunately, promising schemes based on spin-charge transduction introduce external couplings in the form of reference qubits or Coulomb blockade leads. Here, we propose a twist on the spin-charge transduction scheme, converting spin information to orbital information within a single quantum dot (QD). The same QD can be used for initialization, gating, and readout, without unnecessary external couplings. We present detailed investigations into such a scheme in both SiGe and GaAs systems: simulations, including capacitive coupling to an RF-SET; calculations of coherent oscillation times, which determine the readout speed; and calculations of electron spin relaxation times, which determine the initialization speed. We find that both initialization and readout can be performed within the same architecture. Work supported by NSF-QuBIC and MRSEC programs, ARDA, and NSA.

  3. Regional cerebral blood flow analysis of vascular dementia by the single photon emission computed tomography

    International Nuclear Information System (INIS)

    Miyakawa, Kouichi; Watanabe, Sho; Suzuki, Michiyo; Kamijima, Gonbei

    1989-01-01

    In order to evaluate the relationship between regional cerebral blood flow (CBF) and cerebrovascular dementia, eleven patients with vascular dementia and eight patients with non-demented infarction were studied, and regional CBF was measured quantitatively with single photon emission computed tomography (SPECT) using N-isopropyl-p-(I-123) iodoamphetamine. All cases had basal infarction, and vascular dementia was diagnosed by a Hasegawa dementia score of less than 21.5 and a Hachinski ischemic score of more than 7. The results of the present study were as follows: (1) The cerebrovascular dementia group showed a lower mean CBF value than the non-demented group. (2) In the dementia group, regional CBF in the bilateral frontal areas and the affected basal ganglia was significantly more reduced than in the occipital area. (3) A comparison of regional CBF and the Hasegawa dementia score revealed a statistically significant correlation in the bilateral frontal areas in the dementia group. It is possible that measuring regional CBF quantitatively by IMP-SPECT is useful for the clinical analysis of vascular dementia. (author)

  4. Brain blood flow studies with single photon emission computed tomography in patients with plateau waves

    International Nuclear Information System (INIS)

    Hayashi, Minoru; Kobayashi, Hidenori; Kawano, Hirokazu; Handa, Yuji; Noguchi, Yoshiyuki; Shirasaki, Naoki; Hirose, Satoshi

    1986-01-01

    The authors studied brain blood flow with single photon emission computed tomography (SPECT) in two patients with plateau waves; intracranial pressure and blood pressure were also monitored continuously in these patients. One patient had a brain tumor (right sphenoid ridge meningioma) and the other had hydrocephalus after subarachnoid hemorrhage due to rupture of a left internal carotid aneurysm. The intracranial pressure was monitored through an indwelling ventricular catheter attached to a pressure transducer. The blood pressure was recorded through an intra-arterial catheter placed in the dorsalis pedis artery. Brain blood flow was studied with a Headtome SET-011 (manufactured by Shimazu Co., Ltd.). For this flow measurement study, an intravenous injection of about 30 mCi of xenon-133 was given via an antecubital vein. The position of the slice for the SPECT was selected so as to obtain information not only from the cerebral hemisphere but also from the brain stem: a cross-section 25 deg above the orbito-meatal line, passing through the inferior aspect of the frontal horn, the basal ganglia, the lower recess of the third ventricle, and the brain stem. The results indicated that, in the cerebral hemisphere, plateau waves were accompanied by a decrease in blood flow, whereas in the brain stem the blood flow showed little change during plateau waves compared with the interval phase between two plateau waves. These observations may explain why there is no rise in the blood pressure and why patients are often alert during plateau waves. (author)

  5. Estimation of 99mTc-dimercaptosuccinic acid renal uptake using single photon emission computed tomography

    International Nuclear Information System (INIS)

    Ohishi, Yukihiko; Machida, Toyohei; Kido, Akira

    1986-01-01

    Renal function was assessed by measuring 99mTc-dimercaptosuccinic acid (DMSA) uptake by the kidney based on the transectional tomographic image obtained by single photon emission computed tomography (SPECT). Renal uptake was expressed as the percentage of the administered dose detected in the kidney, whose volume was measured by a convolution method. Absorption was corrected by the GE-STAR method using a cutoff level of 42%. To determine the normal range, measurements were made on the 40 kidneys of 10 male and 10 female volunteers confirmed as having normal kidneys both morphologically and functionally. The average volume of the kidney was 220.4 ml for the right and 239.3 ml for the left in males, and 205.9 ml and 236.5 ml, respectively, in females. The renal uptake of radioactivity (at 2 h after injection) was 26.8% for the right and 27.6% for the left in males, with the corresponding figures for females being 26.4% and 27.9%, respectively. The distribution ranges of renal volume and renal uptake were obtained by bivariate analysis with 90% and 95% probability. From these results, our method of determining renal function from the renal uptake of 99mTc-DMSA on a renal transectional tomogram obtained by SPECT is considered to be accurate and potentially useful for clinical purposes. (author)
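
    The uptake computation itself reduces to a simple ratio; a hypothetical sketch (the calibration factor and decay handling are assumptions for illustration, not the paper's protocol) is:

      # Percent of the administered dose found in a kidney volume of
      # interest (VOI) on SPECT, after attenuation correction.
      def percent_renal_uptake(kidney_voi_counts, injected_activity_mbq,
                               counts_per_mbq, decay_factor=1.0):
          """Percent injected dose in the kidney VOI.

          kidney_voi_counts    : attenuation-corrected counts summed over the VOI
          injected_activity_mbq: administered activity
          counts_per_mbq       : system calibration (counts per MBq under the
                                 same acquisition conditions)
          decay_factor         : physical-decay correction to injection time
          """
          kidney_mbq = kidney_voi_counts / counts_per_mbq * decay_factor
          return 100.0 * kidney_mbq / injected_activity_mbq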

  6. Cone beam tomography of the heart using single-photon emission-computed tomography

    International Nuclear Information System (INIS)

    Gullberg, G.T.; Christian, P.E.; Zeng, G.L.; Datz, F.L.; Morgan, H.T.

    1991-01-01

    The authors evaluated cone beam single-photon emission computed tomography (SPECT) of the heart. A new cone beam reconstruction algorithm was used to reconstruct data collected from short-scan acquisitions (slightly more than 180 degrees) by a detector anteriorly traversing a noncircular orbit. The less-than-360-degree acquisition was used to minimize the attenuation artifacts that result from reconstructing posterior projections of 201Tl emissions from the heart. The algorithm includes a new method for reconstructing truncated projections of background tissue activity that eliminates reconstruction ring artifacts. Phantom and patient results are presented which compare a high-resolution cone beam collimator (50-cm focal length; 6.0-mm full width at half maximum [FWHM] at 10 cm) to a low-energy general purpose (LEGP) parallel hole collimator (8.2-mm FWHM at 10 cm) which is 1.33 times more sensitive. The cone beam tomographic results are free of reconstruction artifacts and show improved spatial and contrast resolution over that obtained with the LEGP parallel hole collimator. The limited angular sampling restrictions and truncation problems associated with cone beam tomography do not prevent obtaining diagnostic information. However, even though these preliminary results are encouraging, a thorough clinical study is still needed to investigate the specificity and sensitivity of cone beam tomography

  7. Experimental Adiabatic Quantum Factorization under Ambient Conditions Based on a Solid-State Single Spin System.

    Science.gov (United States)

    Xu, Kebiao; Xie, Tianyu; Li, Zhaokai; Xu, Xiangkun; Wang, Mengqi; Ye, Xiangyu; Kong, Fei; Geng, Jianpei; Duan, Changkui; Shi, Fazhan; Du, Jiangfeng

    2017-03-31

    Adiabatic quantum computation is a universal and robust method of quantum computing. In this architecture, the problem is solved by adiabatically evolving the quantum processor from the ground state of a simple initial Hamiltonian to that of a final one, which encodes the solution of the problem. Adiabatic quantum computation has been shown to be a viable candidate for scalable quantum computation. In this Letter, we report on the experimental realization of an adiabatic quantum algorithm on a single solid-state spin system under ambient conditions. All elements of adiabatic quantum computation, including initial state preparation, adiabatic evolution (simulated by optimal control), and final state readout, are realized experimentally. As an example, we found the ground state of the problem Hamiltonian S_{z}I_{z} on our adiabatic quantum processor, which can be mapped to the factorization of 35 into its prime factors 5 and 7.
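
    The adiabatic protocol itself is easy to emulate numerically. Below is a toy state-vector sketch (the driver Hamiltonian, sweep time, and step count are illustrative assumptions) that sweeps H(s) = (1-s)H0 + sH1 with the problem Hamiltonian H1 = Sz(x)Iz and checks that the final state lands in its ground subspace:

      # Toy numerical emulation of an adiabatic sweep for a two-spin system.
      import numpy as np
      from scipy.linalg import expm

      sz = np.diag([0.5, -0.5])
      sx = np.array([[0.0, 0.5], [0.5, 0.0]])
      I2 = np.eye(2)

      H1 = np.kron(sz, sz)                        # problem Hamiltonian Sz (x) Iz
      H0 = -(np.kron(sx, I2) + np.kron(I2, sx))   # simple driver (assumption)

      # Start in the ground state of H0 and sweep slowly to H1.
      psi = np.linalg.eigh(H0)[1][:, 0].astype(complex)
      T, steps = 200.0, 2000                      # slow sweep => adiabatic
      dt = T / steps
      for k in range(steps):
          s = (k + 0.5) / steps
          psi = expm(-1j * ((1 - s) * H0 + s * H1) * dt) @ psi

      # Population left in the (degenerate) ground subspace of H1: close to 1.
      w, v = np.linalg.eigh(H1)
      ground = v[:, np.isclose(w, w.min())]
      print("ground-subspace population:",
            np.sum(np.abs(ground.conj().T @ psi) ** 2))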

  8. Point and track-finding processors for multiwire chambers

    CERN Document Server

    Hansroul, M

    1973-01-01

    The hardware processors described below are designed to be used in conjunction with multi-wire chambers. They have the characteristic of being based on computational methods, in contrast to analogue procedures. In a sense, they are hardware implementations of computer programs. But, being specially designed for their purpose, they are free of the restrictions imposed by the architecture of the computer on which the equivalent program is to run. The parallelism inherent in the algorithms can thus be fully exploited. Combined with the use of fast-access scratch-pad memories and the non-sequential nature of the control program, the parallelism accounts for the fact that these processors are expected to execute 2-3 orders of magnitude faster than the equivalent Fortran programs on a CDC 7600 or 6600. As a consequence, methods which are simple and straightforward, but which are impractical because they require an exorbitant amount of computer time, can on the contrary be very attractive for hardware implementation. ...

  9. Broadband set-top box using MAP-CA processor

    Science.gov (United States)

    Bush, John E.; Lee, Woobin; Basoglu, Chris

    2001-12-01

    Advances in broadband access are expected to exert a profound impact on our everyday life. It will be the key to the digital convergence of communication, computer and consumer equipment. A common thread that facilitates this convergence comprises digital media and the Internet. To address this market, Equator Technologies, Inc., is developing the Dolphin broadband set-top box reference platform using its MAP-CA Broadband Signal Processor™ chip. The Dolphin reference platform is a universal media platform for the display and presentation of digital content on end-user entertainment systems. The objective of the Dolphin reference platform is to provide a complete set-top box system based on the MAP-CA processor. It includes all the necessary hardware and software components for the emerging broadcast and broadband digital media market based on IP protocols. Such a reference design requires broadband Internet access and high-performance digital signal processing. By using the MAP-CA processor, the Dolphin reference platform is completely programmable, allowing various codecs to be implemented in software, such as MPEG-2, MPEG-4, H.263 and proprietary codecs. The software implementation also enables field upgrades to keep pace with evolving technology and industry demands.

  10. Electric prototype power processor for a 30cm ion thruster

    Science.gov (United States)

    Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.

    1977-01-01

    An electrical prototype power processor unit was designed, fabricated and tested with a 30 cm mercury ion engine for primary space propulsion. The power processor unit used the thyristor series resonant inverter as the basic power stage for the high-power beam and discharge supplies. A transistorized series resonant inverter processed the remaining power for the low-power outputs. The power processor included a digital interface unit to process all input commands and internal telemetry signals so that electric propulsion systems could be operated with a central computer system. The electrical prototype unit included design improvements in the power components, such as thyristors, transistors, filters, resonant capacitors, and power transformers and inductors, in order to reduce component weight, minimize losses, and control the component temperature rise. A design analysis of the electrical prototype's component weight, losses, part count and reliability estimate is also presented. The electrical prototype was tested in a thermal vacuum environment. Integration tests were performed with a 30 cm ion engine and demonstrated operational compatibility. Electromagnetic interference data were also recorded on the design to provide information for spacecraft integration.

  11. On developing B-spline registration algorithms for multi-core processors.

    Science.gov (United States)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-11-07

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.
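
    The key complexity saving comes from the grid alignment: when the control-point spacing is an integer number of voxels, every voxel at a given in-tile offset reuses the same basis weights, which suits data-parallel hardware. A 1-D illustrative sketch (an assumed simplification of the paper's 3-D scheme) is:

      # Grid-aligned cubic B-spline evaluation: precompute basis weights once
      # per in-tile offset, then reuse them for every tile (data-parallel).
      import numpy as np

      def bspline_weights(u):
          """Cubic B-spline basis values for fractional offset u in [0, 1)."""
          return np.array([
              (1 - u) ** 3 / 6.0,
              (3 * u**3 - 6 * u**2 + 4) / 6.0,
              (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
              u ** 3 / 6.0,
          ])

      def deformation_1d(coeffs, spacing, n_voxels):
          """Evaluate the deformation at every voxel, tile by tile."""
          offsets = (np.arange(spacing) + 0.5) / spacing       # in-tile offsets
          W = np.stack([bspline_weights(u) for u in offsets])  # (spacing, 4)
          out = np.empty(n_voxels)
          for tile in range(n_voxels // spacing):
              c = coeffs[tile:tile + 4]                # 4 supporting coefficients
              out[tile * spacing:(tile + 1) * spacing] = W @ c
          return out

      # Usage: 64 voxels, control points every 8 voxels (11 coefficients).
      coeffs = np.random.default_rng(2).normal(size=11)
      field = deformation_1d(coeffs, spacing=8, n_voxels=64)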

  13. A Geometric Algebra Co-Processor for Color Edge Detection

    Directory of Open Access Journals (Sweden)

    Biswajit Mishra

    2015-01-01

    This paper describes an advancement in color edge detection, using a dedicated Geometric Algebra (GA) co-processor implemented on an Application Specific Integrated Circuit (ASIC). GA provides a rich set of geometric operations, giving the advantage that many signal and image processing operations become straightforward and the algorithms intuitive to design. The use of GA allows images to be represented with the three R, G, B color channels defined as a single entity, rather than as separate quantities. A novel custom ASIC is proposed and fabricated that directly targets GA operations and results in significant performance improvement for color edge detection. Use of the hardware described in this paper also shows that the convolution operation with the rotor masks within GA belongs to a class of linear vector filters and can be applied to image or speech signals. The contribution of the proposed approach has been demonstrated by implementing three different types of edge detection schemes on the proposed hardware. The overall performance gains using the proposed GA co-processor over existing software approaches are more than 3.2× faster than GAIGEN and more than 2800× faster than GABLE. The performance of the fabricated GA co-processor is approximately an order of magnitude faster than previously published results for hardware implementations.
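
    To convey the flavour of rotor-based colour filtering (a hedged sketch in the spirit of such operators, not the paper's masks or its ASIC datapath), one can rotate one neighbour's RGB vector by pi about the grey axis and measure how far the combination leaves the grey axis:

      # Rotor-style colour edge sketch: for achromatic regions the combined
      # vector stays on the grey axis; chromatic edges leave a residual.
      import numpy as np

      GREY = np.ones(3) / np.sqrt(3.0)

      def rotate_pi_about_grey(v):
          """Rotate RGB vectors (..., 3) by pi about the grey axis:
          v -> 2(v.n)n - v."""
          return 2.0 * (v @ GREY)[..., None] * GREY - v

      def colour_edges(img):
          """Horizontal colour-edge magnitude for an (H, W, 3) float image."""
          left, right = img[:, :-2, :], img[:, 2:, :]
          s = left + rotate_pi_about_grey(right)    # rotor-style combination
          perp = s - (s @ GREY)[..., None] * GREY   # component off grey axis
          return np.linalg.norm(perp, axis=-1)      # (H, W-2) edge map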

  14. Role of single photon emission computed tomography/computed tomography in diagnostic iodine-131 scintigraphy before initial radioiodine ablation in differentiated thyroid cancer

    International Nuclear Information System (INIS)

    Agrawal, Kanhaiyalal; Bhattacharya, Anish; Mittal, Bhagwant Rai

    2005-01-01

    The study was performed to evaluate the incremental value of single photon emission computed tomography/computed tomography (SPECT/CT) over planar radioiodine imaging before radioiodine ablation in the staging, management and stratification of risk of recurrence (ROR) in differentiated thyroid cancer (DTC) patients. In total, 83 patients (21 male, 62 female) aged 17–75 (mean 39.9) years with DTC were included consecutively in this prospective study. They underwent postthyroidectomy planar and SPECT/CT scans after oral administration of 37–114 MBq iodine-131 (I-131). The scans were interpreted as positive, negative or suspicious for tracer uptake in the thyroid bed, cervical lymph nodes and sites outside the neck. In each case, the findings on planar images were recorded first, without knowledge of the SPECT/CT findings. Operative and pathological findings were used for postsurgical tumor–node–metastasis staging. The tumor staging was reassessed after each of these two scans. SPECT/CT localized radioiodine uptake in the thyroid bed in 9/83 (10.8%) patients, in neck nodes in 24/83 (28.9%) patients and in distant metastases in 8/83 (9.6%) patients in addition to the planar study. Staging was changed in 8/83 (9.6%), ROR in 11/83 (13.2%) and management in 26/83 (31.3%) patients by the pretherapy SPECT/CT in comparison with planar imaging. SPECT/CT had incremental value over the planar scan in 32/83 patients (38.5%). SPECT/CT is feasible during a diagnostic I-131 scan with a low amount of radiotracer. It improved the interpretation of pretherapy I-131 scintigraphy and changed the staging and subsequent patient management

  15. Data register and processor for multiwire chambers

    International Nuclear Information System (INIS)

    Karpukhin, V.V.

    1985-01-01

    A data register and processor for the acquisition and processing of data from the drift chambers of an apparatus for studying relativistic positrons are described. Data are input to the register in eight-bit Gray code, stored, and converted to position code. Data are output from the register to a CAMAC highway and to a front-panel connector. The processor selects the tracks of particles that lie in the horizontal plane of the apparatus. The maximum coordinate spread ΔY and the minimum number of points on a track are set from the front panel of the processor. The resolving time of the processor is 16 μs, and the maximum number of simultaneously analyzable coordinates is 16
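
    The Gray-code-to-position-code conversion mentioned above is a standard shift-XOR cascade; a small sketch for eight-bit words:

      # Convert an 8-bit reflected Gray code to its binary position code.
      def gray_to_binary(g: int) -> int:
          b = g
          for shift in (4, 2, 1):      # log2(8) stages for 8-bit words
              b ^= b >> shift
          return b

      assert gray_to_binary(0b00000000) == 0
      assert gray_to_binary(0b00000001) == 1
      assert gray_to_binary(0b00000011) == 2
      assert gray_to_binary(0b11000000) == 128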

  16. Limit characteristics of digital optoelectronic processor

    Science.gov (United States)

    Kolobrodov, V. G.; Tymchik, G. S.; Kolobrodov, M. S.

    2018-01-01

    In this article, the limiting characteristics of a digital optoelectronic processor (DOEP) are explored. The limits are defined by diffraction effects and by the matrix structure of the devices for input and output of optical signals. The purpose of the present research is to optimize the parameters of the processor's components. The developed physical and mathematical model of the DOEP made it possible to establish the limiting characteristics of the processor, restricted by diffraction effects and the array structure of the equipment for input and output of optical signals, as well as to optimize the parameters of the processor's components. The diameter of the entrance pupil of the Fourier lens is determined by the size of the SLM and the pixel size of the modulator. To determine the spectral resolution, it is proposed to use the concept of an optimum phase, at which the resolved diffraction maxima coincide with the pixel centers of the radiation detector.

  17. Photonics and Fiber Optics Processor Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Photonics and Fiber Optics Processor Lab develops, tests and evaluates high speed fiber optic network components as well as network protocols. In addition, this...

  18. Radiation Tolerant Software Defined Video Processor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — MaXentric is proposing a radiation-tolerant Software Defined Video Processor, codenamed SDVP, for the problem of advanced motion imaging in the space environment....

  19. SCELib3.0: The new revision of SCELib, the parallel computational library of molecular properties in the Single Center Approach

    Science.gov (United States)

    Sanna, N.; Baccarelli, I.; Morelli, G.

    2009-12-01

    SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalog identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which presents additional features with respect to the previous versions, aimed at significantly enhancing its capability to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method, in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated, and the code has been ported to platforms based on accelerating coprocessors, such as NVIDIA GPGPUs; the new parallel model adopted is able to run efficiently on a mixed many-core computing system. Program summary. Program title: SCELib3.0. Catalogue identifier: ADMG_v3_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 2 018 862. No. of bytes in distributed program, including test data, etc.: 4 955 014. Distribution format: tar.gz. Programming language: C. Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x. Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors. Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES). Has the code been vectorized or parallelized?: Yes, 1 to 32 (CPU or GPU) used. RAM: Up to 32 GB depending on the molecular

  20. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    Science.gov (United States)

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.

    2010-01-01

    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies aimed at enabling NASA's ability to explore beyond low Earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as silicon-germanium (SiGe)) to enhance a device's tolerance to radiation events and low-temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project shifts its focus to developing low-power, high-efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability enables avionic architectures to use FPGA-based, radiation-tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for

  1. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  2. Real time monitoring of electron processors

    International Nuclear Information System (INIS)

    Nablo, S.V.; Kneeland, D.R.; McLaughlin, W.L.

    1995-01-01

    A real time radiation monitor (RTRM) has been developed for monitoring the dose rate (current density) of electron beam processors. The system provides continuous monitoring of processor output, electron beam uniformity, and an independent measure of operating voltage or electron energy. In view of the device's ability to replace labor-intensive dosimetry in verification of machine performance on a real-time basis, its application to providing archival performance data for in-line processing is discussed. (author)

  3. Matrix Manipulation Algorithms for Hasse Processor Implementation

    OpenAIRE

    Hahanov, Vladimir; Dahiri, Farid

    2014-01-01

    The processor is implemented in software-hardware modules based on the programming languages C++, Verilog, and Python 2.7, and the platforms Microsoft Windows, X Window (in Unix and Linux), and Macintosh OS X. An HDL-code generator makes it possible to automatically synthesize HDL code for processor structures from 1 to 16 bits wide, for parallel processing of a corresponding number of input vectors or words.

  4. Demonstration of two-qubit algorithms with a superconducting quantum processor.

    Science.gov (United States)

    DiCarlo, L; Chow, J M; Gambetta, J M; Bishop, Lev S; Johnson, B R; Schuster, D I; Majer, J; Blais, A; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2009-07-09

    Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact, such as factoring large numbers and searching databases. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to meet simultaneously requirements that are in conflict: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance, cold ion trap and optical systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch-Jozsa quantum algorithms. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.
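
    The two-qubit Grover search demonstrated here is small enough to verify with a state-vector sketch (illustrative only; the hardware uses cavity-mediated gates rather than explicit matrices). For n = 2 qubits, a single oracle-plus-diffusion iteration finds the marked state with probability 1:

      # Two-qubit Grover search as explicit 4x4 linear algebra.
      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      H2 = np.kron(H, H)

      def grover_2q(marked: int) -> np.ndarray:
          psi = H2 @ np.array([1, 0, 0, 0], dtype=float)   # uniform superposition
          oracle = np.eye(4)
          oracle[marked, marked] = -1                      # phase-flip marked state
          diffusion = 2 * np.full((4, 4), 0.25) - np.eye(4)  # 2|s><s| - I
          return diffusion @ oracle @ psi

      for m in range(4):
          probs = grover_2q(m) ** 2
          assert np.argmax(probs) == m and np.isclose(probs[m], 1.0)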

  5. Few-view single photon emission computed tomography (SPECT) reconstruction based on a blurred piecewise constant object model

    DEFF Research Database (Denmark)

    Wolf, Paul A.; Jørgensen, Jakob Sauer; Schmidt, Taly G.

    2013-01-01

    A sparsity-exploiting algorithm intended for few-view Single Photon Emission Computed Tomography (SPECT) reconstruction is proposed and characterized. The algorithm models the object as piecewise constant subject to a blurring operation. To validate that the algorithm closely approximates the true...

  6. Diagnosing Alzheimer's disease in elderly, mildly demented patients: the impact of routine single photon emission computed tomography

    NARCIS (Netherlands)

    van Gool, W. A.; Walstra, G. J.; Teunisse, S.; van der Zant, F. M.; Weinstein, H. C.; van Royen, E. A.

    1995-01-01

    Based on the observation of bilateral temporoparietal hypoperfusion in Alzheimer's disease (AD), single photon emission computed tomography (SPECT) is advocated by some as a powerful diagnostic tool in the evaluation of demented patients. We studied whether routine brain SPECT in elderly, mildly

  7. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2000-01-01

    We present an algorithm for computing a near optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server...

  8. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2001-01-01

    We present an algorithm for computing a near-optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server...

  9. Exercise echocardiography and single photon emission computed tomography in patients with left anterior descending coronary artery stenosis

    NARCIS (Netherlands)

    A. Salustri (Alessandro); M. Pozzoli (M.); B. Ilmer; W.R.M. Hermans (Walter); A.E.M. Reijs (Ambroos); J.H.C. Reiber (Johan); J.R.T.C. Roelandt (Jos); P.M. Fioretti (Paolo)

    1992-01-01

    textabstractTo compare the diagnostic value of exercise echocardiography and perfusion single photon emission computed tomography (SPECT) in the detection of the presence and the severity of coronary artery disease, we studied 21 patients with isolated stenosis of different degree of the left

  10. Accuracy Limitations in Optical Linear Algebra Processors

    Science.gov (United States)

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.

  11. 7 CFR 1160.108 - Fluid milk processor.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Fluid milk processor. 1160.108 Section 1160.108... Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who... term fluid milk processor shall not include in each of the respective fiscal periods those persons who...

  12. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Sharing processors' allocations with producers. 1435... Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  13. 21 CFR 120.25 - Process verification for certain processors.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Process verification for certain processors. 120... Pathogen Reduction § 120.25 Process verification for certain processors. Each juice processor that relies... covered by this section, processors shall take subsamples according to paragraph (a) of this section for...

  14. DFT algorithms for bit-serial GaAs array processor architectures

    Science.gov (United States)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  15. Non-Boolean computing with nanomagnets for computer vision applications.

    Science.gov (United States)

    Bhanja, Sanjukta; Karunaratne, D K; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.
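
    As a software analogue of what the nanomagnet array does physically (a hedged sketch; the couplings and annealing schedule are made-up stand-ins for the paper's vision-derived Hamiltonian), the quadratic optimization can be written as Ising energy minimization, E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i with s_i in {-1, +1}:

      # Plain simulated-annealing solver for an Ising-like quadratic objective.
      import numpy as np

      rng = np.random.default_rng(3)

      def anneal(J, h, sweeps=2000, t0=2.0, t1=0.01):
          n = h.size
          s = rng.choice([-1, 1], size=n)
          for k in range(sweeps):
              t = t0 * (t1 / t0) ** (k / (sweeps - 1))   # geometric cooling
              i = rng.integers(n)
              dE = 2 * s[i] * (J[i] @ s + h[i])          # cost of flipping s[i]
              if dE <= 0 or rng.random() < np.exp(-dE / t):
                  s[i] = -s[i]
          return s

      # J is assumed symmetric with zero diagonal (pairwise couplings).
      n = 16
      J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
      h = rng.normal(size=n)
      s = anneal(J, h)
      print("energy:", -(s @ J @ s) / 2 - h @ s)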

  16. Single-trial detection of visual evoked potentials by common spatial patterns and wavelet filtering for brain-computer interface.

    Science.gov (United States)

    Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo

    2013-01-01

    Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods have been developed for offline EEG analysis and thus have a high computational complexity and need manual operations. Therefore, they are not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) to improve the signal-to-noise ratio (SNR) of visual evoked potentials (VEPs), which can lead to a single-trial ERP-based BCI.
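
    The CSP stage can be sketched compactly (an illustrative implementation under common conventions; the wavelet-filtering stage and all parameters are omitted or assumed):

      # CSP spatial filters from a generalized eigendecomposition of the
      # two class-mean covariance matrices.
      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b, n_filters=4):
          """trials_*: arrays of shape (n_trials, n_channels, n_samples).

          Returns (n_filters, n_channels) spatial filters that maximize
          variance for one class while minimizing it for the other.
          """
          def mean_cov(trials):
              covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
              return np.mean(covs, axis=0)

          ca, cb = mean_cov(trials_a), mean_cov(trials_b)
          # Generalized eigenproblem: ca w = lambda (ca + cb) w
          vals, vecs = eigh(ca, ca + cb)
          order = np.argsort(vals)        # extremes are most discriminative
          picks = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
          return vecs[:, picks].T

      # Apply as: filtered = W @ trial  (then wavelet-filter and detect).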

  17. FPGA Implementation of Decimal Processors for Hardware Acceleration

    DEFF Research Database (Denmark)

    Borup, Nicolas; Dindorp, Jonas; Nannarelli, Alberto

    2011-01-01

    Applications in non-conventional number systems can benefit from accelerators implemented on reconfigurable platforms, such as Field Programmable Gate-Arrays (FPGAs). In this paper, we show that applications requiring decimal operations, such as the ones necessary in accounting or financial transactions, can be accelerated by Application Specific Processors (ASPs) implemented on FPGAs. For the case of a telephone billing application, we demonstrate that by accelerating the program execution on a FPGA board connected to the computer by a standard bus, we obtain a significant speed-up over its...

  18. Adaptive Optoelectronic Eyes: Hybrid Sensor/Processor Architectures

    Science.gov (United States)

    2006-11-13

    The primate visual cortex is a high-performance image processor designed in part to solve this problem through long-range, multi-scale ... arrays, as described in the section on photonic multichip module (PMCM) integration below. [Remainder of record: circuit-schematic labels and reference fragments from the source report.]

  19. Sn transport calculations on vector and parallel processors

    International Nuclear Information System (INIS)

    Rhoades, W.A.; Childs, R.L.

    1987-01-01

    The transport of radiation from the source to the location of people or equipment gives rise to some of the most challenging of calculations. A problem may involve as many as a billion unknowns, each evaluated several times to resolve interdependence. Such calculations run many hours on a Cray computer, and a typical study involves many such calculations. This paper will discuss the steps taken to vectorize the DOT code, which solves transport problems in two space dimensions (2-D); the extension of this code to 3-D; and the plans for extension to parallel processors

  20. A lock circuit for a multi-core processor

    DEFF Research Database (Denmark)

    2015-01-01

    An integrated circuit comprising multiple processor cores and a lock circuit that comprises a queue register with respective bits set or reset via respective connections dedicated to respective processor cores, whereby the queue register identifies those among the multiple processor cores that are enqueued in the queue register. Furthermore, the integrated circuit comprises a current register and a selector circuit configured to select a processor core and identify that processor core by a value in the current register. A selected processor core is a prioritized processor core among the cores that have a bit that is set in the queue register. The processor cores are connected to receive a signal from the current register. Correspondingly: a method of synchronizing access to software and/or hardware resources by a core of a multi-core processor by means of a lock circuit; a multi-core processor...
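
    A behavioural sketch of the described arbitration, in plain Python rather than HDL (register widths and the selector's priority rule are assumptions):

      # Queue register holds one request bit per core; the current register
      # names the owner; the selector grants the lock to an enqueued core.
      class LockCircuit:
          def __init__(self, n_cores: int):
              self.n = n_cores
              self.queue = 0          # bit i set => core i is enqueued
              self.current = None     # core currently holding the lock

          def request(self, core: int):
              self.queue |= 1 << core             # core sets its dedicated bit
              if self.current is None:
                  self._select()

          def release(self, core: int):
              assert core == self.current
              self.queue &= ~(1 << core)          # core resets its bit
              self.current = None
              self._select()

          def _select(self):
              # Fixed-priority selector: grant the lowest-index set bit, if any.
              for i in range(self.n):
                  if self.queue >> i & 1:
                      self.current = i            # written to current register
                      return

      lock = LockCircuit(4)
      lock.request(2); lock.request(0)
      assert lock.current == 2                    # first requester holds the lock
      lock.release(2)
      assert lock.current == 0                    # next enqueued core is granted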