WorldWideScience

Sample records for central processing units computers

  1. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
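    The abstract's running example, the all-pairs distance computation, maps naturally onto one CUDA thread per (i, j) pair. The sketch below is not the authors' implementation; the kernel name, the feature-major data layout (chosen so that neighbouring threads issue coalesced reads), and the launch parameters are illustrative assumptions.

```cuda
// Minimal all-pairs Euclidean distance sketch (illustrative, not the article's code):
// one thread per pair (i, j). d_data is stored feature-major (d_data[k * N + i]) so
// that, for a fixed feature k, consecutive threads in a warp read consecutive
// addresses, i.e. a coalesced access pattern.
__global__ void allPairsDistance(const float* d_data, float* d_dist, int N, int D)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y;   // first instance
    int j = blockIdx.x * blockDim.x + threadIdx.x;   // second instance
    if (i >= N || j >= N) return;

    float acc = 0.0f;
    for (int k = 0; k < D; ++k) {
        float diff = d_data[k * N + i] - d_data[k * N + j];
        acc += diff * diff;
    }
    d_dist[i * N + j] = sqrtf(acc);
}

// Host-side launch (illustrative):
//   dim3 block(16, 16);
//   dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
//   allPairsDistance<<<grid, block>>>(d_data, d_dist, N, D);
```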

  2. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  3. Security central processing unit applications in the protection of nuclear facilities

    International Nuclear Information System (INIS)

    Goetzke, R.E.

    1987-01-01

    New or upgraded electronic security systems protecting nuclear facilities or complexes will be heavily computer dependent. Proper planning for new systems and the employment of new state-of-the-art 32-bit processors in the processing of subsystem reports are key elements in effective security systems. The processing of subsystem reports represents only a small segment of system overhead. In selecting a security system to meet the current and future needs of nuclear security applications, the central processing unit (CPU) applied in the system architecture is the critical element in system performance. New 32-bit technology eliminates the need for program overlays while providing system programmers with well-documented program tools to develop effective systems to operate in all phases of nuclear security applications

  4. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms of data in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
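    The first stage of the signal chain described above, the spatial filter, is a single matrix-matrix product and therefore a natural fit for a GPU BLAS library. The fragment below is a hedged sketch of that step using cuBLAS rather than the authors' BCI code; the function name, dimension names, and column-major layout are assumptions. It compiles with nvcc and links against -lcublas.

```cuda
// Hedged sketch: applying a spatial filter as a single matrix-matrix product
// Y = W * X on the GPU with cuBLAS (not the authors' implementation; names and
// dimensions are illustrative).
#include <cublas_v2.h>
#include <cuda_runtime.h>

// W: nOut x nChan spatial filter, X: nChan x nSamp raw data, Y: nOut x nSamp.
// All matrices are stored column-major in device memory, as cuBLAS expects.
void applySpatialFilter(cublasHandle_t handle,
                        const float* d_W, const float* d_X, float* d_Y,
                        int nOut, int nChan, int nSamp)
{
    const float alpha = 1.0f, beta = 0.0f;
    // Y = alpha * W * X + beta * Y
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                nOut, nSamp, nChan,
                &alpha, d_W, nOut,
                        d_X, nChan,
                &beta,  d_Y, nOut);
}
```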

  5. Graphics processing unit based computation for NDE applications

    Science.gov (United States)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to improve the cost of numerical simulation. Breakthroughs in Graphical Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes as applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are in two dimensions. Performance improvement of the GPU implementation against the serial CPU implementation is then discussed.
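    For the heat-diffusion problem mentioned above, an explicit finite-difference time step parallelizes naturally with one GPU thread per grid point. The kernel below is an illustrative two-dimensional sketch using a simple FTCS scheme and assumed names; it is not the code developed in the paper.

```cuda
// Illustrative sketch (not the paper's code): one explicit finite-difference
// time step of the 2-D heat equation u_t = alpha * (u_xx + u_yy) on a uniform
// grid. Each thread updates one interior grid point; the host swaps the two
// grids between steps.
__global__ void heatStep(const float* u, float* uNew,
                         int nx, int ny, float r)  // r = alpha*dt/h^2; r <= 0.25 for stability
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;  // leave boundary fixed

    int c = j * nx + i;
    uNew[c] = u[c] + r * (u[c - 1] + u[c + 1] + u[c - nx] + u[c + nx] - 4.0f * u[c]);
}
```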

  6. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of the hybrid GPU/central processing unit (CPU) and full GPU implementations of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
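    The SP2 recursion itself is compact: starting from a Hamiltonian rescaled so that its eigenvalues lie in [0, 1], each sweep either squares the matrix or applies 2X - X^2, steering the trace toward the number of occupied orbitals. The sketch below illustrates that logic on top of cuBLAS; the helper names, convergence test, and memory layout are assumptions, not the authors' production code.

```cuda
// Compact SP2 sweep using cuBLAS (assumptions: column-major n x n matrices
// already resident on the device; nocc is the number of occupied orbitals).
// Illustration of the recursion only, not the authors' code.
#include <cublas_v2.h>
#include <vector>
#include <cmath>

// Return Tr(X) by pulling the diagonal (stride n+1) back to the host.
static double traceOf(cublasHandle_t h, const double* d_X, int n)
{
    std::vector<double> diag(n);
    cublasGetVector(n, sizeof(double), d_X, n + 1, diag.data(), 1);
    double t = 0.0;
    for (double v : diag) t += v;
    return t;
}

// d_X holds X_0 (Hamiltonian rescaled so its eigenvalues lie in [0,1]);
// on return it approximates the density matrix with trace == nocc.
void sp2(cublasHandle_t h, double* d_X, double* d_X2, int n, double nocc, int maxIter)
{
    const double one = 1.0, zero = 0.0, two = 2.0, minusOne = -1.0;
    for (int it = 0; it < maxIter; ++it) {
        // X2 = X * X: the dominant O(n^3) cost, mapped onto DGEMM.
        cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &one, d_X, n, d_X, n, &zero, d_X2, n);
        double tr = traceOf(h, d_X, n);
        if (tr > nocc) {
            // Trace too large: X <- X^2 pushes eigenvalues toward 0.
            cublasDcopy(h, n * n, d_X2, 1, d_X, 1);
        } else {
            // Trace too small: X <- 2X - X^2 pushes eigenvalues toward 1.
            cublasDgeam(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n,
                        &two, d_X, n, &minusOne, d_X2, n, d_X, n);
        }
        if (std::fabs(tr - nocc) < 1e-9) break;  // crude convergence test
    }
}
```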

  7. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  8. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    International Nuclear Information System (INIS)

    Bach, Matthias

    2014-01-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  9. Optimization of the coherence function estimation for multi-core central processing unit

    Science.gov (United States)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit for optimization of the coherence function evaluation arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for the function evaluation for signals represented with digital samples. The algorithm is analyzed for its software implementation and computational problems. Optimization measures are described, including algorithmic, architecture and compiler optimizations, and their results are assessed for multi-core processors from different manufacturers. Thus, the speed-up of the parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization have been significantly improved, showing a high degree of parallelism in the constructed calculation functions. The developed software underwent state registration and will be used as a part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
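    For reference, the quantity being optimized is the magnitude-squared coherence, normally estimated by averaging windowed FFTs over L segments; the per-segment transforms are the part that parallelizes across cores. The formula below is the standard estimator, stated here as an assumption since the record does not reproduce it.

```latex
% Magnitude-squared coherence as usually estimated (Welch averaging over L
% segments); X_l and Y_l are the windowed FFTs of segment l of the two signals.
% Standard definition, assumed for illustration.
\[
  \hat{\gamma}^2_{xy}(f) \;=\;
  \frac{\left|\sum_{l=1}^{L} X_l(f)\, Y_l^{*}(f)\right|^{2}}
       {\sum_{l=1}^{L} \left|X_l(f)\right|^{2}\;
        \sum_{l=1}^{L} \left|Y_l(f)\right|^{2}},
  \qquad 0 \le \hat{\gamma}^2_{xy}(f) \le 1 .
\]
```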

  10. Parallel Implementation of the Discrete Green's Function Formulation of the FDTD Method on a Multicore Central Processing Unit

    Directory of Open Access Journals (Sweden)

    T. Stefański

    2014-12-01

    Full Text Available Parallel implementation of the discrete Green's function formulation of the finite-difference time-domain (DGF-FDTD method was developed on a multicore central processing unit. DGF-FDTD avoids computations of the electromagnetic field in free-space cells and does not require domain termination by absorbing boundary conditions. Computed DGF-FDTD solutions are compatible with the FDTD grid enabling the perfect hybridization of FDTD with the use of time-domain integral equation methods. The developed implementation can be applied to simulations of antenna characteristics. For the sake of example, arrays of Yagi-Uda antennas were simulated with the use of parallel DGF-FDTD. The efficiency of parallel computations was investigated as a function of the number of current elements in the FDTD grid. Although the developed method does not apply the fast Fourier transform for convolution computations, advantages stemming from the application of DGF-FDTD instead of FDTD can be demonstrated for one-dimensional wire antennas when simulation results are post-processed by the near-to-far-field transformation.
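    Schematically, the DGF-FDTD update expresses the field at a grid node as a discrete convolution of the discrete Green's function with the current elements, which is why free-space cells and absorbing boundary conditions drop out of the computation. The notation below is assumed for illustration and is not taken from the paper.

```latex
% Schematic DGF-FDTD form (assumed notation): the field at node i and time step
% n is a discrete convolution of the FDTD discrete Green's function G with the
% current elements J, so only cells that carry currents enter the computation.
\[
  \mathbf{E}_i^{\,n} \;=\; \sum_{m=0}^{n} \;\sum_{j \in \text{sources}}
  \mathbf{G}_{i-j}^{\,n-m} \cdot \mathbf{J}_j^{\,m}
\]
```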

  11. Using Systems Theory to Examine Patient and Nurse Structures, Processes, and Outcomes in Centralized and Decentralized Units.

    Science.gov (United States)

    Real, Kevin; Fay, Lindsey; Isaacs, Kathy; Carll-White, Allison; Schadler, Aric

    2018-01-01

    This study utilizes systems theory to understand how changes to physical design structures impact communication processes and patient and staff design-related outcomes. Many scholars and researchers have noted the importance of communication and teamwork for patient care quality. Few studies have examined changes to nursing station design within a systems theory framework. This study employed a multimethod, before-and-after, quasi-experimental research design. Nurses completed surveys in centralized units and later in decentralized units (N = 26 pre, N = 51 post). Patients completed surveys (N = 62 pre) in centralized units and later in decentralized units (N = 49 post). Surveys included quantitative measures and qualitative open-ended responses. Patients preferred the decentralized units because of larger single-occupancy rooms, greater privacy/confidentiality, and overall satisfaction with design. Nurses had a more complex response. Nurses approved of the patient rooms, unit environment, and noise levels in decentralized units. However, they reported reduced access to support spaces, lower levels of team/mentoring communication, and less satisfaction with design than in centralized units. Qualitative findings supported these results. Nurses were more positive about centralized units and patients were more positive toward decentralized units. The results of this study suggest a need to understand how system components operate in concert. A major contribution of this study is the inclusion of patient satisfaction with design, an important yet overlooked factor in patient satisfaction. Healthcare design researchers and practitioners may consider how changing system interdependencies can lead to unexpected changes to communication processes and system outcomes in complex systems.

  12. All-optical quantum computing with a hybrid solid-state processing unit

    International Nuclear Information System (INIS)

    Pei Pei; Zhang Fengyang; Li Chong; Song Heshan

    2011-01-01

    We develop an architecture for a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our method has the prominent advantage of insensitivity to dissipation processes, benefiting from the virtual excitation of the subsystems. Moreover, quantum nondemolition measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in a broader sense, in that different solid-state systems can merge and be integrated into one quantum processor.

  13. Experience with a mobile data storage device for transfer of studies from the critical care unit to a central nuclear medicine computer

    International Nuclear Information System (INIS)

    Cradduck, T.D.; Driedger, A.A.

    1981-01-01

    The introduction of mobile scintillation cameras has enabled the more immediate provision of nuclear medicine services in areas remote from the central nuclear medicine laboratory. Since a large number of such studies involve the use of a computer for data analysis, the concurrent problem of how to transmit those data to the computer becomes critical. A device is described using hard magnetic discs as the recording media and which can be wheeled from the patient's bedside to the central computer for playback. Some initial design problems, primarily associated with the critical timing which is necessary for the collection of gated studies, were overcome and the unit has been in service for the past two years. The major limitations are the relatively small capacity of the discs and the fact that the data are recorded in list mode. These constraints result in studies having poor statistical validity. The slow turn-around time, which results from the necessity to transport the system to the department and replay the study into the computer before analysis can begin, is also of particular concern. The use of this unit has clearly demonstrated the very important role that nuclear medicine can play in the care of the critically ill patient. The introduction of a complete acquisition and analysis unit is planned so that prompt diagnostic decisions can be made available within the intensive care unit. (author)

  14. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The ²³⁵U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
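    In outline, offloading the time-consuming step amounts to handing the large double-precision matrix product to a vendor BLAS. The sketch below shows what that step looks like with cuBLAS for column-major matrices; the function name and the explicit host-device copies are illustrative, not SAMMY source code, and matrices as large as those quoted above may additionally require tiling to fit in GPU memory.

```cuda
// Outline (illustrative, not SAMMY source): offloading a single large
// double-precision product C = A * B to the GPU through cuBLAS.
// A is m x k, B is k x n, C is m x n, all stored column-major on the host.
#include <cublas_v2.h>
#include <cuda_runtime.h>

void gemmOnGpu(const double* h_A, const double* h_B, double* h_C,
               int m, int k, int n)
{
    double *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, sizeof(double) * m * k);
    cudaMalloc(&d_B, sizeof(double) * k * n);
    cudaMalloc(&d_C, sizeof(double) * m * n);
    cudaMemcpy(d_A, h_A, sizeof(double) * m * k, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, sizeof(double) * k * n, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const double alpha = 1.0, beta = 0.0;
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, d_A, m, d_B, k, &beta, d_C, m);
    cublasDestroy(handle);

    cudaMemcpy(h_C, d_C, sizeof(double) * m * n, cudaMemcpyDeviceToHost);
    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
}
```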

  15. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The ²³⁵U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  16. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit (100, 200, 300) for computing a sequence of output images on the basis of a sequence of input images, comprises: a motion estimation unit (102) for computing a motion vector field on the basis of the input images; a quality measurement unit (104) for computing a value of a

  17. Introduction to the LaRC central scientific computing complex

    Science.gov (United States)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation) are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  18. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.

  19. Distributed Computing with Centralized Support Works at Brigham Young.

    Science.gov (United States)

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  20. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
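    The baseline that such optimization work usually starts from is the classic shared-memory tree reduction. The kernel below is a generic sketch of that pattern (first add during load, then sequential addressing), with assumed names; it is not the benchmark suite developed in the paper.

```cuda
// Basic shared-memory tree reduction (illustrative sketch, not the paper's
// benchmark code). Each block reduces blockDim.x * 2 input elements and writes
// one partial sum; the partial sums are then reduced again or summed on the
// host. blockDim.x is assumed to be a power of two.
__global__ void reduceSum(const float* in, float* out, int n)
{
    extern __shared__ float sdata[];
    unsigned tid = threadIdx.x;
    unsigned i = blockIdx.x * blockDim.x * 2 + threadIdx.x;

    // First add during load halves the number of otherwise idle threads.
    float v = (i < n) ? in[i] : 0.0f;
    if (i + blockDim.x < n) v += in[i + blockDim.x];
    sdata[tid] = v;
    __syncthreads();

    // Tree reduction with sequential addressing avoids shared-memory bank conflicts.
    for (unsigned s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = sdata[0];
}
// Launch (illustrative):
//   reduceSum<<<numBlocks, threads, threads * sizeof(float)>>>(d_in, d_partial, n);
```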

  1. SHIVGAMI : Simplifying tHe titanIc blastx process using aVailable GAthering of coMputational unIts

    Directory of Open Access Journals (Sweden)

    Naman Mangukia

    2017-10-01

    Full Text Available Assembling novel genomes from scratch will remain a never-ending process until all living organisms have been covered. Moreover, this de novo approach is employed in RNA-Seq and metagenomics analyses. Functional identification of the scaffolds or transcripts from such draft assemblies is a substantial step that routinely employs the well-known BlastX program, which lets a user search a DNA query against the NCBI protein (NR, ~120 GB) database. In spite of its multicore-processing option, BlastX is a lengthy process for bulk, lengthy query inputs. Tremendous efforts are constantly being applied to this problem through increased computational power, GPU-based computing, cloud computing and Hadoop-based approaches, which ultimately require a gigantic cost in terms of money and processing. To address this issue, we have come up with SHIVGAMI, which automates the entire process using Perl and shell scripts that divide, distribute and process the input FASTA sequences among the computational units according to the CPU cores available on each. A Linux operating system, the NR database and a BlastX installation are prerequisites for each system. The beauty of this stand-alone automation program SHIVGAMI is that it requires the LAN connection exactly twice: during 'query distribution' and at the time of 'process completion'. In the initial phase, it divides the FASTA sequences according to each computer's core capability. It then distributes the data, along with small automation scripts that run the BlastX process, to the respective computational units, which send their result files back to the master computer. The master computer finally combines and compiles the files into a single result. This simple automation converts a computer lab into a GRID without any additional investment in software, hardware or manpower. In short, SHIVGAMI is a time- and cost-saving tool for all users, starting from commercial firm

  2. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    Science.gov (United States)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033 × 1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
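    The MPI process-to-GPU mapping described above can be sketched in a few lines. The program below is an assumption-laden illustration (round-robin device selection by global rank, which presumes ranks are packed per node; production codes usually key off a node-local rank), not the author's solver.

```cuda
// Minimal sketch of a rank-to-device mapping (assumed names, not the thesis
// solver): each MPI process selects its own GPU, falling back to the CPU when
// no device is available, so a job can mix GPU and CPU ranks across nodes.
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int nDevices = 0;
    cudaGetDeviceCount(&nDevices);
    if (nDevices > 0) {
        cudaSetDevice(rank % nDevices);  // round-robin GPUs among local ranks
        printf("rank %d -> GPU %d\n", rank, rank % nDevices);
    } else {
        printf("rank %d -> CPU fallback\n", rank);
    }
    // ... per-rank solver work on its domain blocks would go here ...
    MPI_Finalize();
    return 0;
}
```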

  3. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    Science.gov (United States)

    Hoerger, J.

    1984-01-01

    Users of ADABAS, a relational-like data base management system with its data base programming language (NATURAL), are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy-to-use and flexible means for transferring logical data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration and possible solutions are discussed.

  4. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Rath, N., E-mail: Nikolaus@rath.org; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q. [Department of Applied Physics and Applied Mathematics, Columbia University, 500 W 120th St, New York, New York 10027 (United States); Kato, S. [Department of Information Engineering, Nagoya University, Nagoya (Japan)

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  5. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    International Nuclear Information System (INIS)

    Rath, N.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.; Kato, S.

    2014-01-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules

  6. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Full Text Available Graphics processing units (GPUs have been increasingly used for general-purpose computation in recent years. The GPU accelerated applications are found in both scientific and commercial domains. Sorting is considered as one of the very important operations in many applications, so its efficient implementation is essential for the overall application performance. This paper represents an effort to analyze and evaluate the implementations of the representative sorting algorithms on the graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort were evaluated on the Compute Unified Device Architecture (CUDA platform that is used to execute applications on NVIDIA graphics processing units. Algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
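    As a point of reference for the hand-written algorithms evaluated above, GPU sorting is also exposed through the Thrust library that ships with CUDA, which typically dispatches to a radix sort for primitive key types. The snippet below is a usage illustration, not the automated test environment used in the paper.

```cuda
// Quick illustration of GPU sorting through Thrust (usage sketch, not the
// paper's evaluation harness): copy keys to the device, sort in parallel,
// copy the sorted keys back.
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/sort.h>
#include <cstdlib>

int main()
{
    const int n = 1 << 20;
    thrust::host_vector<int> h(n);
    for (int i = 0; i < n; ++i) h[i] = std::rand();

    thrust::device_vector<int> d = h;   // transfer keys to the GPU
    thrust::sort(d.begin(), d.end());   // parallel sort on the device
    thrust::copy(d.begin(), d.end(), h.begin());
    return 0;
}
```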

  7. Porting of the transfer-matrix method for multilayer thin-film computations on graphics processing units

    Science.gov (United States)

    Limmer, Steffen; Fey, Dietmar

    2013-07-01

    Thin-film computations are often a time-consuming task during optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. It turned out that significant speed-ups can be achieved. We investigate the circumstances under which the best speed-up values can be expected. Therefore we compare different GPUs among themselves and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up and the runtime behavior depending on the input data is examined.
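    In the transfer-matrix method, each wavelength requires an independent product of 2 × 2 layer matrices, which is what makes the problem embarrassingly parallel on a GPU. The kernel below sketches that wavelength-parallel product for normal incidence and non-absorbing layers; the names and the simplified physics are assumptions, not the ported code from the paper.

```cuda
// Sketch of the wavelength-parallel core of a transfer-matrix computation
// (normal incidence, real refractive indices; illustrative only): each thread
// accumulates the product of the 2x2 characteristic matrices of all layers
// for one wavelength.
#include <cuComplex.h>
#include <math.h>

__global__ void layerProduct(const double* lambda,   // wavelengths [nw]
                             const double* n,        // refractive index per layer [nl]
                             const double* d,        // physical thickness per layer [nl]
                             cuDoubleComplex* M,     // output: nw products, 4 entries each
                             int nw, int nl)
{
    const double PI = 3.14159265358979323846;
    int w = blockIdx.x * blockDim.x + threadIdx.x;
    if (w >= nw) return;

    // Start from the identity matrix.
    cuDoubleComplex m00 = make_cuDoubleComplex(1.0, 0.0), m01 = make_cuDoubleComplex(0.0, 0.0);
    cuDoubleComplex m10 = make_cuDoubleComplex(0.0, 0.0), m11 = make_cuDoubleComplex(1.0, 0.0);

    for (int j = 0; j < nl; ++j) {
        double delta = 2.0 * PI * n[j] * d[j] / lambda[w];   // phase thickness
        cuDoubleComplex a = make_cuDoubleComplex(cos(delta), 0.0);
        cuDoubleComplex b = make_cuDoubleComplex(0.0, sin(delta) / n[j]);
        cuDoubleComplex c = make_cuDoubleComplex(0.0, n[j] * sin(delta));
        // Layer matrix L = [[a, b], [c, a]]; accumulate M <- M * L.
        cuDoubleComplex t00 = cuCadd(cuCmul(m00, a), cuCmul(m01, c));
        cuDoubleComplex t01 = cuCadd(cuCmul(m00, b), cuCmul(m01, a));
        cuDoubleComplex t10 = cuCadd(cuCmul(m10, a), cuCmul(m11, c));
        cuDoubleComplex t11 = cuCadd(cuCmul(m10, b), cuCmul(m11, a));
        m00 = t00; m01 = t01; m10 = t10; m11 = t11;
    }
    M[4 * w + 0] = m00; M[4 * w + 1] = m01;
    M[4 * w + 2] = m10; M[4 * w + 3] = m11;
}
```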

  8. Computed tomography of the central nervous system in small animals

    International Nuclear Information System (INIS)

    Tipold, A.; Tipold, E.

    1991-01-01

    Using computed tomography in 44 small animals, some well-defined anatomical structures and pathological processes of the central nervous system are described. Computed tomography is necessary not only for the diagnosis of tumors; malformations, inflammatory, degenerative and vascular diseases, and traumas are also visible

  9. New FORTRAN computer programs to acquire and process isotopic mass-spectrometric data

    International Nuclear Information System (INIS)

    Smith, D.H.

    1982-08-01

    The computer programs described in New Computer Programs to Acquire and Process Isotopic Mass Spectrometric Data have been revised. This report describes in some detail the operation of these programs, which acquire and process isotopic mass spectrometric data. Both functional and overall design aspects are addressed. The three basic program units - file manipulation, data acquisition, and data processing - are discussed in turn. Step-by-step instructions are included where appropriate, and each subsection is described in enough detail to give a clear picture of its function. Organization of file structure, which is central to the entire concept, is extensively discussed with the help of numerous tables. Appendices contain flow charts and outline file structure to help a programmer unfamiliar with the programs to alter them with a minimum of lost time

  10. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog; Jørgensen, John Bagterp; Dammann, Bernd

    2012-01-01

    The high floating point performance and memory bandwidth of Graphical Processing Units (GPUs) make them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires re... as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards.

  11. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  12. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  13. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to a traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1,000 × 1,000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from the norm when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  14. Mitra 15. Development of a centralized multipoint network at 10 megabauds

    International Nuclear Information System (INIS)

    Mestrallet, Michel.

    1975-01-01

    The APIS system was designed to control the irradiation devices located in Osiris (a 70 MW swimming-pool reactor). In a first stage the satellite units work autonomously. Each is equipped with a small computer (C.I.I. Mitra 15). A larger central computer enables tables and operations to be worked out and prepared from one shut-down to the next. It is also used for link-up trials, to establish test programmes and, secondarily, to serve as a small computing centre. Each Processing Unit possesses a peripheral MINIBUS to which the terminal links are connected. This MINIBUS takes the form of a printed circuit placed in a crate, allowing the link-up card positions to be entirely standardized. The MITRA 15 possesses a micro-programme which automatically deals with deviations when a mishap is detected. It is built around a main ferrite-core memory equipped with 4 accesses for the connection of 1 to 4 micro-programmed Processing Units (P.U.) or for direct access to the memory. According to the nature of the micro-programmes integrated into the P.U., these can serve as a Central Unit, an Exchange Unit or a Special Unit adapted to special processing [fr

  15. Requirements for SSC central computing staffing (conceptual)

    International Nuclear Information System (INIS)

    Pfister, J.

    1985-01-01

    Given a computation center with ~10,000 MIPS supporting ~1,000 users, what are the staffing requirements? The attempt in this paper is to list the functions and staff size required in a central computing or centrally supported computing complex. The organization assumes that although considerable computing power would exist (mostly for online) in the four interaction regions (IR), there are functions/capabilities better performed outside the IR and, in this model, at a "central computing facility." What follows is one staffing approach, not necessarily optimal, with certain assumptions about numbers of computer systems, media, networks and system controls, that is, assuming one would get the best technology available. Thus, it is speculation about what the technology may bring and what it takes to operate it. From an end-user support standpoint it is less clear, given the geography of an SSC, what the consulting support should look like and where it should be located

  16. Ising Processing Units: Potential and Challenges for Discrete Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Coffrin, Carleton James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagarajan, Harsha [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bent, Russell Whitford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-05

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods to a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.

  17. Instruction Set Architectures for Quantum Processing Units

    OpenAIRE

    Britt, Keith A.; Humble, Travis S.

    2017-01-01

    Progress in quantum computing hardware raises questions about how these devices can be controlled, programmed, and integrated with existing computational workflows. We briefly describe several prominent quantum computational models, their associated quantum processing units (QPUs), and the adoption of these devices as accelerators within high-performance computing systems. Emphasizing the interface to the QPU, we analyze instruction set architectures based on reduced and complex instruction s...

  18. Project for a codable central unit for analog data acquisition

    International Nuclear Information System (INIS)

    Bouras, F.; Da Costa Vieira, D.; Sohier, B.

    1974-07-01

    The instrumentation for a 256-channel codable central processor intended for operation in connection with a computer is presented. The computer indicates the addresses of the channels to be measured, orders the conversion, and acquires the results of the measurements. The acquisition and computer coupling unit is located in a standard CAMAC rack (6 U, 19 in., 25 positions); an example of configuration is given. The measurement rate depends on the converter speed and the dead time of the analog circuits; for an ADC 1103 converter the total dead time is 6.5 s min. The analog circuits are intended for a ±10 V range; the accuracy is 1/2n (2n is the number of bits). The result is acquired in words of 12 bits maximum. The information transfer and analog commutation (through integrated analog gates) are discussed [fr

  19. Application engineering for process computer systems

    International Nuclear Information System (INIS)

    Mueller, K.

    1975-01-01

    The variety of tasks for process computers in nuclear power stations necessitates the centralization of all production stages from the planning stage to the delivery of the finished process computer system (PRA) to the user. This so-called 'application engineering' comprises all of the activities connected with the application of the PRA: a) establishment of the PRA concept, b) project counselling, c) handling of offers, d) handling of orders, e) internal handling of orders, f) technical counselling, g) establishing of parameters, h) monitoring deadlines, i) training of customers, j) compiling an operation manual. (orig./AK) [de

  20. FamSeq: a variant calling program for family-based sequencing data using graphics processing units.

    Directory of Open Access Journals (Sweden)

    Gang Peng

    2014-10-01

    Full Text Available Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.

  1. Installation of new Generation General Purpose Computer (GPC) compact unit

    Science.gov (United States)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  2. Optimization models of the supply of power structures’ organizational units with centralized procurement

    Directory of Open Access Journals (Sweden)

    Sysoiev Volodymyr

    2013-01-01

    Full Text Available Management of the state power structures' organizational units for materiel and technical support requires the use of effective decision-support tools, due to the complexity, interdependence, and dynamism of supply in a market economy. The corporate nature of power structures makes centralized procurement management of particular interest, as it provides significant advantages through coordination, the elimination of duplication, and economies of scale. This article presents optimization models for the supply of the state power structures' organizational units with centralized procurement, for different levels of the simulated materiel and technical support processes. The models make it possible to find the most profitable supply options for the organizational units in a centre-oriented logistics system under changing needs, volumes of allocated funds, and the logistics costs that accompany the supply process, by maximizing the level of provision of the organizational units with the necessary material and technical resources over the entire planning period while minimizing total logistics costs, taking into account the diverse nature and different priorities of the organizational units and of the material and technical resources.

  3. Computing betweenness centrality in external memory

    DEFF Research Database (Denmark)

    Arge, Lars; Goodrich, Michael T.; Walderveen, Freek van

    2013-01-01

    Betweenness centrality is one of the most well-known measures of the importance of nodes in a social-network graph. In this paper we describe the first known external-memory and cache-oblivious algorithms for computing betweenness centrality. We present four different external-memory algorithms...
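    For context, the quantity these external-memory algorithms compute is defined below. The definition is the standard one, stated here since the record does not reproduce it; σ_st is the number of shortest paths from s to t and σ_st(v) the number of those passing through v.

```latex
% Standard definition of betweenness centrality (assumed for reference, not
% quoted from the record above).
\[
  BC(v) \;=\; \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}
\]
```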

  4. A computer controlled tele-cobalt unit

    International Nuclear Information System (INIS)

    Brace, J.A.

    1982-01-01

    A computer controlled cobalt treatment unit was commissioned for treating patients in January 1980. Initially the controlling computer was a minicomputer, but now the control of the therapy unit is by a microcomputer. The treatment files, which specify the movement and configurations necessary to deliver the prescribed dose, are produced on the minicomputer and then transferred to the microcomputer using minitape cartridges. The actual treatment unit is based on a standard cobalt unit with a few additional features e.g. the drive motors can be controlled either by the computer or manually. Since the treatment unit is used for both manual and automatic treatments, the operational procedure under computer control is made to closely follow the manual procedure for a single field treatment. The necessary safety features which protect against human, hardware and software errors as well as the advantages and disadvantages of computer controlled radiotherapy are discussed

  5. Concordance-based Kendall's Correlation for Computationally-Light vs. Computationally-Heavy Centrality Metrics: Lower Bound for Correlation

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2017-01-01

    Full Text Available We identify three different levels of correlation (pair-wise relative ordering, network-wide ranking and linear regression) that could be assessed between a computationally-light centrality metric and a computationally-heavy centrality metric for real-world networks. Kendall's concordance-based correlation measure can be used to quantitatively assess how well the relative ordering of two vertices vi and vj with respect to a computationally-light centrality metric can be taken as their relative ordering with respect to a computationally-heavy centrality metric. We hypothesize that the pair-wise relative ordering (concordance-based) assessment of the correlation between centrality metrics is the strictest of the three levels of correlation, and claim that Kendall's concordance-based correlation coefficient will be lower than the coefficients observed with the more relaxed levels of correlation (the linear regression-based Pearson's product-moment correlation coefficient and the network-wide ranking-based Spearman's correlation coefficient). We validate this hypothesis by evaluating the three correlation coefficients between two sets of centrality metrics: the computationally-light degree and local clustering coefficient complement-based degree centrality metrics, and the computationally-heavy eigenvector, betweenness and closeness centrality metrics, for a diverse collection of 50 real-world networks.
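
    As a rough, hedged illustration of the three correlation levels compared above (not taken from the paper), the following Python sketch computes Kendall's, Spearman's and Pearson's coefficients between a computationally-light metric (degree centrality) and a computationally-heavy one (betweenness centrality); the Barabasi-Albert test graph and the particular metric pairing are assumptions, and NetworkX and SciPy are assumed to be available.

      # Hypothetical example: compare a light and a heavy centrality metric at
      # the three correlation levels named in the abstract above.
      import networkx as nx
      from scipy.stats import kendalltau, spearmanr, pearsonr

      # Assumed test network: a small Barabasi-Albert scale-free graph.
      G = nx.barabasi_albert_graph(n=200, m=3, seed=42)

      light = nx.degree_centrality(G)        # computationally light
      heavy = nx.betweenness_centrality(G)   # computationally heavy

      nodes = list(G.nodes())
      x = [light[v] for v in nodes]
      y = [heavy[v] for v in nodes]

      tau, _ = kendalltau(x, y)   # pair-wise relative ordering (concordance)
      rho, _ = spearmanr(x, y)    # network-wide ranking
      r, _ = pearsonr(x, y)       # linear-regression level

      print(f"Kendall tau={tau:.3f}  Spearman rho={rho:.3f}  Pearson r={r:.3f}")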

  6. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    International Nuclear Information System (INIS)

    Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy

    2016-01-01

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter than on the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed statistically, the quality of the GPU and CPU images is essentially the same.

  7. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Suprijadi [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Haryanto, Freddy [Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia)

    2016-03-11

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter than on the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed statistically, the quality of the GPU and CPU images is essentially the same.

  8. ATLAS Distributed Computing: Its Central Services core

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration

    2018-01-01

    The ATLAS Distributed Computing (ADC) Project is responsible for the off-line processing of data produced by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. It facilitates data and workload management for ATLAS computing on the Worldwide LHC Computing Grid (WLCG). ADC Central Services operations (CSops) is a vital part of ADC, responsible for the deployment and configuration of services needed by ATLAS computing and operation of those services on CERN IT infrastructure, providing knowledge of CERN IT services to ATLAS service managers and developers, and supporting them in case of issues. Currently this entails the management of thirty-seven different OpenStack projects, with more than five thousand cores allocated for these virtual machines, as well as overseeing the distribution of twenty-nine petabytes of storage space in EOS for ATLAS. As the LHC begins to get ready for the next long shut-down, which will bring in many new upgrades to allow for more data to be captured by the on-line syste...

  9. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    International Nuclear Information System (INIS)

    Badal, Andreu; Badano, Aldo

    2009-01-01

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  10. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)

    2009-11-15

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  11. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    Science.gov (United States)

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
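
    As a toy, CPU-only illustration of why such simulations map so well onto GPUs (every photon history is independent), the sketch below, which is not derived from PENELOPE or the code above, samples exponential free paths for a batch of photons in a homogeneous slab and compares the transmitted fraction with the analytic value; the attenuation coefficient and slab thickness are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      n_photons = 1_000_000     # independent histories: trivially parallel work
      mu = 0.2                  # assumed total attenuation coefficient (1/cm)
      slab_thickness = 10.0     # assumed slab thickness (cm)

      # Distance to first interaction for every photon, sampled at once.
      free_path = rng.exponential(scale=1.0 / mu, size=n_photons)

      # Photons whose first interaction lies beyond the slab are transmitted
      # (scattering is ignored in this simplified sketch).
      transmitted = np.count_nonzero(free_path > slab_thickness)

      print(f"Transmitted fraction: {transmitted / n_photons:.4f}")
      print(f"Analytic expectation: {np.exp(-mu * slab_thickness):.4f}")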

  12. GPU-accelerated micromagnetic simulations using cloud computing

    Energy Technology Data Exchange (ETDEWEB)

    Jermain, C.L., E-mail: clj72@cornell.edu [Cornell University, Ithaca, NY 14853 (United States); Rowlands, G.E.; Buhrman, R.A. [Cornell University, Ithaca, NY 14853 (United States); Ralph, D.C. [Cornell University, Ithaca, NY 14853 (United States); Kavli Institute at Cornell, Ithaca, NY 14853 (United States)

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. - Highlights: • The benefits of cloud computing for GPU-accelerated micromagnetics are examined. • We present the MuCloud software for running simulations on cloud computing. • Simulation run times are measured to benchmark cloud computing performance. • Comparison benchmarks are analyzed between CPU and GPU based solvers.

  13. GPU-accelerated micromagnetic simulations using cloud computing

    International Nuclear Information System (INIS)

    Jermain, C.L.; Rowlands, G.E.; Buhrman, R.A.; Ralph, D.C.

    2016-01-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. - Highlights: • The benefits of cloud computing for GPU-accelerated micromagnetics are examined. • We present the MuCloud software for running simulations on cloud computing. • Simulation run times are measured to benchmark cloud computing performance. • Comparison benchmarks are analyzed between CPU and GPU based solvers.

  14. A survey of process control computers at the Idaho Chemical Processing Plant

    International Nuclear Information System (INIS)

    Dahl, C.A.

    1989-01-01

    The Idaho Chemical Processing Plant (ICPP) at the Idaho National Engineering Laboratory is charged with the safe processing of spent nuclear fuel elements for the United States Department of Energy. The ICPP was originally constructed in the late 1950s and used state-of-the-art technology for process control at that time. The state of process control instrumentation at the ICPP has steadily improved to keep pace with emerging technology. Today, the ICPP is a collage of emerging computer technology in process control, with some systems as simple as standalone measurement computers while others are state-of-the-art distributed control systems controlling the operations of an entire facility within the plant. The ICPP has made maximal use of process computer technology aimed at increasing surety, safety, and efficiency of the process operations. Many benefits have been derived from the use of the computers at minimal cost, including decreased misoperations in the facility, and more benefits are expected in the future

  15. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate the perform...

  16. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  17. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (Graphical Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked by computation time for various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (General Purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
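
    The chain described above is built largely from FFTs and matrix products. As a hedged, CPU-only sketch of one such stage, the code below performs frequency-domain pulse compression (a matched filter) with NumPy's FFT standing in for cuFFT; the chirp parameters, echo delay and noise level are assumptions, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      fs = 1.0e6                          # assumed sample rate (Hz)
      t = np.arange(0, 1.0e-3, 1.0 / fs)  # 1 ms transmit pulse
      chirp = np.exp(1j * np.pi * 4.0e8 * t**2)   # assumed linear FM chirp

      # Received trace: one delayed, attenuated echo plus complex noise.
      n = 4096
      delay = 1500
      rx = np.zeros(n, dtype=complex)
      rx[delay:delay + chirp.size] = 0.1 * chirp
      rx += 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

      # Frequency-domain matched filter (a GPU chain would call cuFFT here).
      compressed = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(chirp, n)))

      peak = int(np.argmax(np.abs(compressed)))
      print(f"Compressed peak at sample {peak} (echo injected at {delay})")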

  18. Graphics processor efficiency for realization of rapid tabular computations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Us, S.A.; Shestakov, M.V.

    2016-01-01

    Capabilities of graphics processing units (GPU) and central processing units (CPU) have been investigated for the realization of fast-calculation algorithms that use tabulated functions. The realization of tabulated functions is exemplified on GPU/CPU architecture-based processors. A comparison is made between the operating efficiencies of the GPU and CPU employed for tabular calculations under different conditions of use. Recommendations are formulated for the use of graphics and central processors to speed up scientific and engineering computations through the use of tabulated functions
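
    To make the trade-off concrete, here is a small CPU-only sketch (not from the paper) comparing direct evaluation of a function with interpolation into a precomputed table; the test function, grid density and query count are arbitrary assumptions, and the same comparison is what the authors carry out on GPU and CPU hardware.

      import time
      import numpy as np

      # Build the table once; later evaluations become lookups plus interpolation.
      x_table = np.linspace(0.0, 10.0, 10_001)
      y_table = np.exp(-x_table) * np.sin(3.0 * x_table)

      queries = np.random.default_rng(0).uniform(0.0, 10.0, size=5_000_000)

      t0 = time.perf_counter()
      direct = np.exp(-queries) * np.sin(3.0 * queries)   # direct evaluation
      t1 = time.perf_counter()
      tabled = np.interp(queries, x_table, y_table)       # tabulated evaluation
      t2 = time.perf_counter()

      print(f"direct: {t1 - t0:.3f} s, tabulated: {t2 - t1:.3f} s, "
            f"max abs error: {np.max(np.abs(direct - tabled)):.2e}")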

  19. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally-intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
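
    A minimal CPU sketch of the two steps that were moved to the GPU in this study is given below. Welch's method is substituted here for the auto-regressive spectral estimator used in the actual BCI system, and the channel count, sampling rate and random test data are assumptions.

      import numpy as np
      from scipy.signal import welch

      fs = 1200                        # assumed sampling rate (Hz)
      n_channels, n_samples = 64, 4 * fs
      rng = np.random.default_rng(7)

      data = rng.standard_normal((n_channels, n_samples))   # stand-in recordings
      W = rng.standard_normal((n_channels, n_channels))     # spatial filter weights

      # Step 1: spatial filtering expressed as one matrix-matrix multiplication.
      filtered = W @ data

      # Step 2: power spectral estimate on every filtered channel
      # (Welch substitutes for the auto-regressive method of the paper).
      freqs, psd = welch(filtered, fs=fs, nperseg=256, axis=-1)

      print("PSD matrix shape (channels x frequency bins):", psd.shape)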

  20. Units for on-line control with the ES computer in physical investigations

    International Nuclear Information System (INIS)

    Efimov, L.G.

    1983-01-01

    The peripheral part of a complex of facilities created for organizing on-line operation of an ES computer with experimental devices is described; it comprises two units. The first unit is employed as part of a universal driver of the CAMAC branch for connection with the microprogram ES computer channel controller and provides multioperational (up to 44 record varieties) device software service. Bilateral data exchange between the device and the computer can be performed by bytes as well as by 16- or 24-bit words, using CAMAC group modes, at a maximum rate of 1.25 Mbyte/s. The second unit is intended for synchronizing the data acquisition process with the device starting system and for supporting the dialogue between the device operator and the computer

  1. Fast network centrality analysis using GPUs

    Directory of Open Access Journals (Sweden)

    Shi Zhiao

    2011-05-01

    Full Text Available Abstract Background: With the exploding volume of data generated by continuously evolving high-throughput technologies, biological network analysis problems are growing larger in scale and craving more computational power. General Purpose computation on Graphics Processing Units (GPGPU) provides a cost-effective technology for the study of large-scale biological networks. Designing algorithms that maximize data parallelism is the key to leveraging the power of GPUs. Results: We proposed an efficient data-parallel formulation of the All-Pairs Shortest Path problem, which is the key component of shortest path-based centrality computation. A betweenness centrality algorithm built upon this formulation was developed and benchmarked against the most recent GPU-based algorithm. Speedups of 11 to 19% were observed on various simulated scale-free networks. We further designed three algorithms based on this core component to compute closeness centrality, eccentricity centrality and stress centrality. To make all these algorithms available to the research community, we developed the software package gpu-fan (GPU-based Fast Analysis of Networks) for CUDA-enabled GPUs. Speedups of 10-50× compared with CPU implementations were observed for simulated scale-free networks and real-world biological networks. Conclusions: gpu-fan provides a significant performance improvement for centrality computation in large-scale networks. Source code is available under the GNU Public License (GPL) at http://bioinfo.vanderbilt.edu/gpu-fan/.
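
    For readers who want a CPU reference point for the quantities gpu-fan accelerates, the sketch below computes betweenness, closeness and eccentricity with NetworkX on an assumed simulated scale-free network; it only illustrates the metrics themselves and says nothing about the data-parallel GPU formulation of the paper.

      import networkx as nx

      # Assumed test graph: a simulated scale-free network (connected by construction).
      G = nx.barabasi_albert_graph(n=500, m=2, seed=0)

      betweenness = nx.betweenness_centrality(G)   # shortest-path based
      closeness = nx.closeness_centrality(G)
      eccentricity = nx.eccentricity(G)

      top = max(betweenness, key=betweenness.get)
      print("Most central node by betweenness:", top, f"({betweenness[top]:.4f})")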

  2. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  3. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing several days of calculation to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations, speed increases in local networks, and, as a result, dropping prices for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
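
    The independence of image blocks described above is the whole basis for distributed photogrammetric processing. A toy multiprocessing sketch of that pattern is shown below; the per-block "processing" is a placeholder reduction and the block size is an assumption.

      import numpy as np
      from multiprocessing import Pool

      def process_block(block):
          # Stand-in for real per-block work (tie points, DTM, orthophoto, ...).
          return float(block.mean())

      if __name__ == "__main__":
          image = np.random.default_rng(0).random((4096, 4096))
          blocks = [image[i:i + 512, j:j + 512]
                    for i in range(0, 4096, 512)
                    for j in range(0, 4096, 512)]

          with Pool() as pool:              # one worker per available CPU core
              results = pool.map(process_block, blocks)

          print(f"Processed {len(results)} independent blocks")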

  4. U.S. Experience and practices associated with the use of centralized rad waste processing centers

    International Nuclear Information System (INIS)

    Gibson, James D.

    1994-01-01

    This paper presents the experience and current practices employed within the United States (US) associated with the use of centralized radwaste processing centers for the processing of low-level radioactive waste (LLRW). Information is provided on the methods, technologies, and practices employed by Scientific Ecology Group, Inc. (SEG), which is the world's largest processor of LLRW. SEG processes over 80,000 cubic meters of waste annually and achieves an overall volume reduction of 12:1. LLRW processing in the United States is currently performed primarily at centralized radwaste processing centers, such as SEG's Central Volume Reduction Facility (CVRF) in Oak Ridge, Tennessee. This is primarily due to the superior economics of the advanced waste processing technologies, equipment, and personnel maintained at these centers. Information is provided on how SEG uses supercompaction, incineration, metals recycling, vitrification, and various other waste processing techniques to process both dry and wet wastes from over 90 commercial nuclear power plants, government-operated facilities, hospitals, universities, and various small generators of radioactive waste

  5. How Do We Really Compute with Units?

    Science.gov (United States)

    Fiedler, B. H.

    2010-01-01

    The methods that we teach students for computing with units of measurement are often not consistent with the practice of professionals. For professionals, the vast majority of computations with quantities of measure are performed within programs on electronic computers, for which an accounting for the units occurs only once, in the design of the…

  6. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  7. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify the parts of the algorithms in need of optimization. Efficient GPU kernels were created to allow the applications to use a GPU and to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedups of the GPU version were more than 40 times compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.

  8. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance and the relationship between data transfer time and parallel computing time. Further, according to the different features of different memories, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
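
    For reference, a minimal CPU version of the Laplacian sharpening that the paper ports to CUDA can be written as below; SciPy's convolve2d stands in for the CUDA kernel and the synthetic grayscale test image is an assumption.

      import numpy as np
      from scipy.signal import convolve2d

      laplacian = np.array([[0,  1, 0],
                            [1, -4, 1],
                            [0,  1, 0]], dtype=float)

      image = np.random.default_rng(3).random((256, 256))   # placeholder image in [0, 1]

      # Classic sharpening: subtract the Laplacian response from the original.
      edges = convolve2d(image, laplacian, mode="same", boundary="symm")
      sharpened = np.clip(image - edges, 0.0, 1.0)

      print("Sharpened image shape:", sharpened.shape)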

  9. Distribution of lithostratigraphic units within the central block of Yucca Mountain, Nevada: A three-dimensional computer-based model, Version YMP.R2.0

    International Nuclear Information System (INIS)

    Buesch, D.C.; Nelson, J.E.; Dickerson, R.P.; Drake, R.M. II; San Juan, C.A.; Spengler, R.W.; Geslin, J.K.; Moyer, T.C.

    1996-01-01

    Yucca Mountain, Nevada is underlain by 14.0 to 11.6 Ma volcanic rocks tilted eastward 3° to 20° and cut by faults that were primarily active between 12.7 and 11.6 Ma. A three-dimensional computer-based model of the central block of the mountain consists of seven structural subblocks composed of six formations and the interstratified-bedded tuffaceous deposits. Rocks from the 12.7 Ma Tiva Canyon Tuff, which forms most of the exposed rocks on the mountain, to the 13.1 Ma Prow Pass Tuff are modeled with 13 surfaces. Modeled units represent single formations such as the Pah Canyon Tuff, grouped units such as the combination of the Yucca Mountain Tuff with the superjacent bedded tuff, and divisions of the Topopah Spring Tuff such as the crystal-poor vitrophyre interval. The model is based on data from 75 boreholes, from which a structure contour map at the base of the Tiva Canyon Tuff and isochore maps for each unit are constructed to serve as primary input. Modeling consists of an iterative cycle that begins with the primary structure-contour map, from which isochore values of the subjacent model unit are subtracted to produce the structure contour map on the base of that unit. This new structure contour map forms the input for another cycle of isochore subtraction to produce the next structure contour map. In this method of solids modeling, the model units are represented by surfaces (structure contour maps), and all surfaces are stored in the model. Surfaces can be converted to volumes of model units with additional effort. This lithostratigraphic and structural model can be used for (1) storing data from, and planning future, site characterization activities, (2) preliminary geometry of units for design of the Exploratory Studies Facility and potential repository, and (3) performance assessment evaluations
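
    The iterative subtraction cycle at the heart of this model is simple to express. The schematic sketch below, with synthetic grids and generic unit names that are not the report's, subtracts each unit's isochore (thickness) grid from the current structure-contour surface to obtain the surface at the base of the next unit down.

      import numpy as np

      rng = np.random.default_rng(5)
      ny, nx = 50, 60                                # assumed grid dimensions

      # Primary input: structure-contour surface at the base of the top unit.
      current_surface = 1500.0 + 20.0 * rng.random((ny, nx))

      # Assumed isochore (thickness) grids for subjacent units, shallowest first.
      isochores = {
          "unit_A": 30.0 + 5.0 * rng.random((ny, nx)),
          "unit_B": 80.0 + 10.0 * rng.random((ny, nx)),
          "unit_C": 45.0 + 5.0 * rng.random((ny, nx)),
      }

      surfaces = {"top_unit_base": current_surface}
      for name, thickness in isochores.items():
          current_surface = current_surface - thickness   # one subtraction cycle
          surfaces[f"{name}_base"] = current_surface

      for name, surf in surfaces.items():
          print(f"{name}: mean elevation {surf.mean():.1f} m")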

  10. Vortex particle method in parallel computations on graphical processing units used in study of the evolution of vortex structures

    International Nuclear Information System (INIS)

    Kudela, Henryk; Kosior, Andrzej

    2014-01-01

    Understanding the dynamics and the mutual interaction among various types of vortical motions is a key ingredient in clarifying and controlling fluid motion. In the paper several different cases related to vortex tube interactions are presented. Due to problems with very long computation times on the single processor, the vortex-in-cell (VIC) method is implemented on the multicore architecture of a graphics processing unit (GPU). Numerical results of leapfrogging of two vortex rings for inviscid and viscous fluid are presented as test cases for the new multi-GPU implementation of the VIC method. Influence of the Reynolds number on the reconnection process is shown for two examples: antiparallel vortex tubes and orthogonally offset vortex tubes. Our aim is to show the great potential of the VIC method for solutions of three-dimensional flow problems and that the VIC method is very well suited for parallel computation. (paper)

  11. Using a progress computer for the direct acquisition and processing of radiation protection data

    International Nuclear Information System (INIS)

    Barz, H.G.; Borchardt, K.D.; Hacke, J.; Kirschfeld, K.E.; Kluppak, B.

    1976-01-01

    A process computer will be used at the Hahn-Meitner-Institute to rationalize radiation protection measures. Approximately 150 transmitters are to be connected to this computer, in particular the radiation measuring devices of a nuclear reactor, of hot cells, and of a heavy-ion accelerator, as well as the emission and environmental monitoring systems. The advantages of this method are described: central data acquisition, central alarm and stoppage information, data processing of certain measurement values, and the possibility of quick disturbance analysis. Furthermore, the authors report on the preparations already completed, particularly the transmission of digital and analog values to the computer. (orig./HP) [de

  12. Graphics Processing Units for HEP trigger systems

    International Nuclear Information System (INIS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.

    2016-01-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  13. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  14. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge-sensitive amplifiers. The prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and is flexible with respect to parameter adjustment for various experimental conditions (authors)

  15. Future forest aboveground carbon dynamics in the central United States: the importance of forest demographic processes

    Science.gov (United States)

    Wenchi Jin; Hong S. He; Frank R. Thompson; Wen J. Wang; Jacob S. Fraser; Stephen R. Shifley; Brice B. Hanberry; William D. Dijak

    2017-01-01

    The Central Hardwood Forest (CHF) in the United States is currently a major carbon sink, but there are uncertainties about how long the current carbon sink will persist and whether the CHF will eventually become a carbon source. We used a multi-model ensemble to investigate aboveground carbon density of the CHF from 2010 to 2300 under the current climate. Simulations were done using...

  16. Partial wave analysis using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Niklaus; Liu Beijiang; Wang Jike, E-mail: nberger@ihep.ac.c [Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Lu, Shijingshan, 100049 Beijing (China)

    2010-04-01

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single instruction multiple data floating point units that is almost ideally suited for the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.

  17. 32-Bit FASTBUS computer

    International Nuclear Information System (INIS)

    Blossom, J.M.; Hong, J.P.; Kellner, R.G.

    1985-01-01

    Los Alamos National Laboratory is building a 32-bit FASTBUS computer using the NATIONAL SEMICONDUCTOR 32032 central processing unit (CPU) and containing 16 million bytes of memory. The board can act both as a FASTBUS master and as a FASTBUS slave. It contains a custom direct memory access (DMA) channel which can perform 80 million bytes per second block transfers across the FASTBUS

  18. Managing internode data communications for an uninitialized process in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  19. Hybrid parallel computing architecture for multiview phase shifting

    Science.gov (United States)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit co-operates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high computation cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the performance of the proposed technique using an NVIDIA GT560Ti graphics card rather than sequential C code on a 3.4 GHz Intel Core i7 3770.
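
    As a point of reference for the phase-computation stage listed above, the sketch below recovers the wrapped phase from a standard four-step phase-shifting sequence using synthetic fringe images; the four-step scheme, fringe frequency and image size are assumptions, since the paper's exact algorithm is not reproduced here.

      import numpy as np

      h, w = 480, 640
      true_phase = np.linspace(0, 8 * np.pi, w)[None, :] * np.ones((h, 1))

      # Four synthetic fringe images with phase shifts of 0, pi/2, pi, 3*pi/2.
      shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
      I = [0.5 + 0.5 * np.cos(true_phase + s) for s in shifts]

      # Per-pixel wrapped phase; on a GPU each pixel is an independent thread.
      wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])

      print(f"Wrapped phase range: {wrapped.min():.3f} to {wrapped.max():.3f} rad")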

  20. Modern computer hardware and the role of central computing facilities in particle physics

    International Nuclear Information System (INIS)

    Zacharov, V.

    1981-01-01

    Important recent changes in the hardware technology of computer system components are reviewed, and the impact of these changes assessed on the present and future pattern of computing in particle physics. The place of central computing facilities is particularly examined, to answer the important question as to what, if anything, should be their future role. Parallelism in computing system components is considered to be an important property that can be exploited with advantage. The paper includes a short discussion of the position of communications and network technology in modern computer systems. (orig.)

  1. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms.

    Science.gov (United States)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  2. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.

  3. Unit cell-based computer-aided manufacturing system for tissue engineering

    International Nuclear Information System (INIS)

    Kang, Hyun-Wook; Park, Jeong Hun; Kang, Tae-Yun; Seol, Young-Joon; Cho, Dong-Woo

    2012-01-01

    Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundreds of micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for the computing process of a unit cell representing a single pore structure. Next, an algorithm and software were developed and applied to construct porous structures with a single or multiple pore design using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of a scaffold for tissue engineering. (paper)

  4. Unit cell-based computer-aided manufacturing system for tissue engineering.

    Science.gov (United States)

    Kang, Hyun-Wook; Park, Jeong Hun; Kang, Tae-Yun; Seol, Young-Joon; Cho, Dong-Woo

    2012-03-01

    Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundreds of micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for the computing process of a unit cell representing a single pore structure. Next, an algorithm and software were developed and applied to construct porous structures with a single or multiple pore design using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of a scaffold for tissue engineering.

  5. Computing with impure numbers - Automatic consistency checking and units conversion using computer algebra

    Science.gov (United States)

    Stoutemyer, D. R.

    1977-01-01

    The computer algebra language MACSYMA enables the programmer to include symbolic physical units in computer calculations, and features automatic detection of dimensionally-inhomogeneous formulas and conversion of inconsistent units in a dimensionally homogeneous formula. Some examples illustrate these features.
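
    A present-day analogue of this MACSYMA capability can be sketched with SymPy's units module (an assumption; the original examples used MACSYMA itself): quantities carry symbolic units, convert_to performs unit conversion, and a dimensionally inhomogeneous sum simply refuses to combine, which exposes the inconsistency.

      from sympy.physics.units import meter, kilometer, second, hour, convert_to

      distance = 5 * kilometer
      time = 0.5 * hour

      # Automatic unit conversion of a dimensionally homogeneous expression.
      speed = distance / time
      print(convert_to(speed, meter / second))    # prints the speed in meter/second

      # A dimensionally inhomogeneous sum stays un-merged, flagging the error.
      print(distance + time)                      # terms remain separate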

  6. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Centralized salary... Services Administration 56-SALARY OFFSET FOR INDEBTEDNESS OF FEDERAL EMPLOYEES TO THE UNITED STATES Centralized Salary Offset (CSO) Procedures-GSA as Paying Agency § 105-56.027 Centralized salary offset...

  7. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Centralized salary... Services Administration 56-SALARY OFFSET FOR INDEBTEDNESS OF FEDERAL EMPLOYEES TO THE UNITED STATES Centralized Salary Offset (CSO) Procedures-GSA as Creditor Agency § 105-56.017 Centralized salary offset...

  8. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them promising candidates for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  9. Quantitative analysis of residual protein contamination of podiatry instruments reprocessed through local and central decontamination units

    Directory of Open Access Journals (Sweden)

    Ramage Gordon

    2011-01-01

    Full Text Available Abstract Background The cleaning stage of the instrument decontamination process has come under increased scrutiny due to the increasing complexity of surgical instruments and the adverse affects of residual protein contamination on surgical instruments. Instruments used in the podiatry field have a complex surface topography and are exposed to a wide range of biological contamination. Currently, podiatry instruments are reprocessed locally within surgeries while national strategies are favouring a move toward reprocessing in central facilities. The aim of this study was to determine the efficacy of local and central reprocessing on podiatry instruments by measuring residual protein contamination of instruments reprocessed by both methods. Methods The residual protein of 189 instruments reprocessed centrally and 189 instruments reprocessed locally was determined using a fluorescent assay based on the reaction of proteins with o-phthaldialdehyde/sodium 2-mercaptoethanesulfonate. Results Residual protein was detected on 72% (n = 136 of instruments reprocessed centrally and 90% (n = 170 of instruments reprocessed locally. Significantly less protein (p Conclusions Overall, the results show the superiority of central reprocessing for complex podiatry instruments when protein contamination is considered, though no significant difference was found in residual protein between local decontamination unit and central decontamination unit processes for Blacks files. Further research is needed to undertake qualitative identification of protein contamination to identify any cross contamination risks and a standard for acceptable residual protein contamination applicable to different instruments and specialities should be considered as a matter of urgency.

  10. Quantitative analysis of residual protein contamination of podiatry instruments reprocessed through local and central decontamination units.

    Science.gov (United States)

    Smith, Gordon Wg; Goldie, Frank; Long, Steven; Lappin, David F; Ramage, Gordon; Smith, Andrew J

    2011-01-10

    The cleaning stage of the instrument decontamination process has come under increased scrutiny due to the increasing complexity of surgical instruments and the adverse affects of residual protein contamination on surgical instruments. Instruments used in the podiatry field have a complex surface topography and are exposed to a wide range of biological contamination. Currently, podiatry instruments are reprocessed locally within surgeries while national strategies are favouring a move toward reprocessing in central facilities. The aim of this study was to determine the efficacy of local and central reprocessing on podiatry instruments by measuring residual protein contamination of instruments reprocessed by both methods. The residual protein of 189 instruments reprocessed centrally and 189 instruments reprocessed locally was determined using a fluorescent assay based on the reaction of proteins with o-phthaldialdehyde/sodium 2-mercaptoethanesulfonate. Residual protein was detected on 72% (n = 136) of instruments reprocessed centrally and 90% (n = 170) of instruments reprocessed locally. Significantly less protein (p podiatry instruments when protein contamination is considered, though no significant difference was found in residual protein between local decontamination unit and central decontamination unit processes for Blacks files. Further research is needed to undertake qualitative identification of protein contamination to identify any cross contamination risks and a standard for acceptable residual protein contamination applicable to different instruments and specialities should be considered as a matter of urgency.

  11. Development of new process network for gas chromatograph and analyzers connected with SCADA system and Digital Control Computers at Cernavoda NPP Unit 1

    International Nuclear Information System (INIS)

    Deneanu, Cornel; Popa Nemoiu, Dragos; Nica, Dana; Bucur, Cosmin

    2007-01-01

    The continuous monitoring of the gas mixture concentrations (deuterium/hydrogen/oxygen/nitrogen) accumulated in the 'Moderator Cover Gas', 'Liquid Control Zone' and 'Heat Transport D2O Storage Tank Cover Gas', as well as the continuous monitoring of the heavy-water-in-light-water concentration in the 'Boilers Steam', 'Boilers Blown Down', 'Moderator heat exchangers' and 'Recirculated Water System' (to detect any leaks) at Cernavoda NPP Unit 1, led to the requirement for a new process network for the gas chromatograph and analyzers connected to the SCADA system and Digital Control Computers of Cernavoda NPP Unit 1. In 2005 the gas chromatograph process network was designed and implemented, connecting the gas chromatograph equipment to the SCADA system and Digital Control Computers of Cernavoda NPP Unit 1. This network was later extended to connect the AE13 and AE14 Fourier Transform Infrared (FTIR) analyzers as well. The gas chromatograph equipment measures the concentrations of the gas mixture (deuterium/hydrogen/oxygen/nitrogen) with high accuracy. The FTIR AE13 and AE14 analyzers measure the heavy-water-in-light-water concentration in the Boilers Steam, Boilers Blown Down, Moderator heat exchangers and Recirculated Water System, monitoring and signaling any leaks. The gas chromatograph equipment and the FTIR AE13 and AE14 analyzers use the OPC (OLE for Process Control) technologies available in ABB's VistaNet network for interoperability with automation equipment. This new process network interconnects the ABB chromatograph and FTIR analyzers with the plant Digital Control Computers using this new technology, resulting in increased reliability, improved inspection capability and improved system safety

  12. In silico Interrogation of Insect Central Complex Suggests Computational Roles for the Ellipsoid Body in Spatial Navigation

    Directory of Open Access Journals (Sweden)

    Vincenzo G. Fiore

    2017-08-01

    Full Text Available The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed

  13. In silico Interrogation of Insect Central Complex Suggests Computational Roles for the Ellipsoid Body in Spatial Navigation.

    Science.gov (United States)

    Fiore, Vincenzo G; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank

    2017-01-01

    The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features
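
    The model itself is not reproduced in these two records; as a loose, illustrative toy only, the sketch below shows one computation often attributed to the ellipsoid body: maintaining a heading estimate on a ring of units by integrating self-motion with a visual cue. All parameters, the update rule and the decoding step are assumptions made for this sketch, not the authors' connectome-based model.

    ```python
    # Toy ring-of-units heading estimate, loosely inspired by ellipsoid-body models.
    # Every parameter and the update rule are illustrative assumptions, not the paper's model.
    import numpy as np

    N = 16                                           # number of units around the ring
    angles = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

    def bump(center):
        """Gaussian-like bump of activity centred on `center` (radians)."""
        d = np.angle(np.exp(1j * (angles - center)))  # wrapped angular distance
        return np.exp(-(d ** 2) / 0.5)

    rng = np.random.default_rng(0)
    heading = 0.0                                    # internal heading estimate (radians)
    for step in range(100):
        omega = 0.05                                 # self-motion (angular velocity) input
        visual_cue = 1.0 + 0.05 * rng.standard_normal()   # noisy landmark direction
        heading += omega                                            # path integration
        heading += 0.1 * np.angle(np.exp(1j * (visual_cue - heading)))  # cue correction
        activity = bump(heading)                     # bump of activity tracks the estimate

    decoded = np.angle(np.sum(activity * np.exp(1j * angles)))  # population-vector decode
    print(f"decoded heading ≈ {decoded:.2f} rad")
    ```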

  14. Distributed trace using central performance counter memory

    Science.gov (United States)

    Satterfield, David L.; Sexton, James C.

    2013-01-22

    A plurality of processing cores and a central storage unit having at least a memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit.
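
    As a software-only illustration of the daisy-chain trace idea in this record, the toy sketch below has several cores place trace records on a shared daisy-chain connection that a central storage unit drains into its co-located memory. The class and field names are invented for the example and are not taken from the patent.

    ```python
    # Minimal software simulation of the daisy-chain trace idea: each core places one
    # trace record on the shared daisy-chain connection; the central storage unit drains
    # the connection and stores the records in its local memory.
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class TracePacket:
        source_core: int
        payload: str

    class CentralStorageUnit:
        def __init__(self):
            self.memory = []            # co-located trace memory

        def receive(self, packet):
            self.memory.append(packet)

    def run_ring(num_cores=4):
        storage = CentralStorageUnit()
        ring = deque()                  # the daisy-chain connection
        for core_id in range(num_cores):
            ring.append(TracePacket(core_id, f"event from core {core_id}"))
        while ring:                     # storage unit detects and stores each packet
            storage.receive(ring.popleft())
        return storage

    if __name__ == "__main__":
        for p in run_ring().memory:
            print(p.source_core, p.payload)
    ```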

  15. Do Bananas Have a Culture? United Fruit Company Colonies in Central America 1900-1960

    Directory of Open Access Journals (Sweden)

    Atalia Shragai

    2014-06-01

    Full Text Available This article is concerned with the processes underlying the development of the unique identifications and culture which evolved among the First Class Workers of the United Fruit Company - the vast majority of whom were citizens of the United States, working alongside Europeans and Central Americans - during the first half of the twentieth century. Examining the social and cultural practices widespread among the Company’s colonies, I trace the nature of the ‘Banana Culture’, a term coined by the members of this group.

  16. Heterogeneous real-time computing in radio astronomy

    Science.gov (United States)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPUs). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPUs, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.
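
    A minimal software analogue of the staged heterogeneous pipeline described above: three queue-connected stages stand in for the FPGA front end, the GPU floating-point work, and the final CPU stage. The stage contents, block sizes and the FFT "work" are placeholders chosen for this sketch, not details taken from the paper.

    ```python
    # Staged pipeline sketch: stage 1 stands in for the FPGA front end (packetizing
    # samples), stage 2 for the GPU floating-point work, stage 3 for the CPU writing
    # results out. All data and sizes are illustrative placeholders.
    import queue
    import threading
    import numpy as np

    q_fpga_to_gpu = queue.Queue()
    q_gpu_to_cpu = queue.Queue()

    def fpga_stage(num_blocks=8, block_len=1024):
        rng = np.random.default_rng(0)
        for _ in range(num_blocks):
            q_fpga_to_gpu.put(rng.standard_normal(block_len))   # "digitized" samples
        q_fpga_to_gpu.put(None)                                  # end-of-stream marker

    def gpu_stage():
        while (block := q_fpga_to_gpu.get()) is not None:
            q_gpu_to_cpu.put(np.abs(np.fft.rfft(block)) ** 2)    # floating-point-heavy work
        q_gpu_to_cpu.put(None)

    def cpu_stage(results):
        while (spectrum := q_gpu_to_cpu.get()) is not None:
            results.append(spectrum.mean())                      # stand-in for "write to disk"

    results = []
    threads = [threading.Thread(target=fpga_stage),
               threading.Thread(target=gpu_stage),
               threading.Thread(target=cpu_stage, args=(results,))]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"processed {len(results)} blocks")
    ```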

  17. Micromagnetic simulations using Graphics Processing Units

    International Nuclear Information System (INIS)

    Lopez-Diaz, L; Aurelio, D; Torres, L; Martinez, E; Hernandez-Lopez, M A; Gomez, J; Alejos, O; Carpentieri, M; Finocchio, G; Consolo, G

    2012-01-01

    The methodology for adapting a standard micromagnetic code to run on graphics processing units (GPUs) and exploit the potential for parallel calculations of this platform is discussed. GPMagnet, a general purpose finite-difference GPU-based micromagnetic tool, is used as an example. Speed-up factors of two orders of magnitude can be achieved with GPMagnet with respect to a serial code. This allows for running extensive simulations, nearly inaccessible with a standard micromagnetic solver, at reasonable computational times. (topical review)

  18. Use of general purpose graphics processing units with MODFLOW

    Science.gov (United States)

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
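
    Of the preconditioners listed above, the Jacobi (diagonal) option is the simplest to show. The sketch below is a CPU reference implementation of Jacobi-preconditioned conjugate gradients on a compressed-sparse-row matrix; it illustrates the method only and is not the UPCG solver or its GPGPU port.

    ```python
    # CPU reference sketch of Jacobi-preconditioned conjugate gradients on a CSR matrix,
    # the simplest of the preconditioner options the abstract lists.
    import numpy as np
    import scipy.sparse as sp

    def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
        """Solve A x = b for a symmetric positive definite sparse A with Jacobi preconditioning."""
        x = np.zeros_like(b)
        r = b - A @ x
        M_inv = 1.0 / A.diagonal()          # Jacobi preconditioner: inverse of the diagonal
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # 1-D Laplacian test problem stored in compressed sparse row format
    n = 200
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x = jacobi_pcg(A, b)
    print("residual norm:", np.linalg.norm(b - A @ x))
    ```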

  19. Stochastic Analysis of a Queue Length Model Using a Graphics Processing Unit

    Czech Academy of Sciences Publication Activity Database

    Přikryl, Jan; Kocijan, J.

    2012-01-01

    Roč. 5, č. 2 (2012), s. 55-62 ISSN 1802-971X R&D Projects: GA MŠk(CZ) MEB091015 Institutional support: RVO:67985556 Keywords : graphics processing unit * GPU * Monte Carlo simulation * computer simulation * modeling Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2012/AS/prikryl-stochastic analysis of a queue length model using a graphics processing unit.pdf

  20. Control of peripheral units by satellite computer

    International Nuclear Information System (INIS)

    Tran, K.T.

    1974-01-01

    A computer system was developed allowing the control of nuclear physics experiments, and use of the results by means of graphical and conversational assemblies. This system, which is made of two computers, one IBM-370/135 and one Telemecanique Electrique T1600, controls the conventional IBM peripherals and also the special ones made in the laboratory, such as data acquisition display and graphics units. The visual display is implemented by a scanning-type television, equipped with a light-pen. These units in themselves are universal, but their specifications were established to meet the requirements of nuclear physics experiments. The input-output channels of the two computers have been connected together by an interface, designed and implemented in the Laboratory. This interface allows the exchange of control signals and data (the data are changed from bytes into words and vice versa). The T1600 controls the peripherals mentioned above according to the commands of the IBM370. Hence the T1600 here plays the part of a satellite computer which allows conversation with the main computer and also ensures the control of its special peripheral units [fr

  1. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in the ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low consumption units for the far future, essential to increase efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time consuming pattern recognition problem, generall...

  2. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    Science.gov (United States)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities of the affected patient. To solve the issue of inconsistency and user-dependency in manual lesion measurement of MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized algorithm of imaging processing that is used in CAD development, MS CAD integration and evaluation in clinical workflow is technically challenging due to the requirement of high computation rates and memory bandwidth in the recursive nature of the algorithm. In this paper, we present the development and evaluation of using a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA developmental toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to rapidly integrate in an electronic patient record or any disease-centric health care system.
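
    The core idea above, per-voxel lesion probabilities from a K-Nearest Neighbors classifier, can be sketched on toy data as below. The features, threshold and volume size are invented for illustration; the actual CAD runs in MATLAB with CUDA acceleration and is not reproduced here.

    ```python
    # Illustrative per-voxel KNN lesion-probability sketch on invented data; the feature
    # choices, class rule and volume size are assumptions made for this example only.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    # toy training data: per-voxel features (e.g. intensities from two MRI sequences)
    n_train = 1000
    features = rng.standard_normal((n_train, 2))
    labels = (features.sum(axis=1) > 1.0).astype(int)      # 1 = "lesion", 0 = "normal"

    knn = KNeighborsClassifier(n_neighbors=15)
    knn.fit(features, labels)

    # classify every voxel of a toy 32x32x32 volume described by the same two features
    volume_features = rng.standard_normal((32 * 32 * 32, 2))
    lesion_prob = knn.predict_proba(volume_features)[:, 1].reshape(32, 32, 32)
    lesion_mask = lesion_prob > 0.5
    print("estimated lesion voxels:", int(lesion_mask.sum()))
    ```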

  3. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin eWu

    2011-02-01

    Full Text Available High-throughput computing (HTC uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
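
    As a toy illustration of the batch-processing idea described above (evaluating several traits in parallel rather than sequentially), the sketch below farms a stub "model fit" out to a pool of worker processes. The trait names and the sleep-based stand-in for model fitting are assumptions made for this example.

    ```python
    # Toy batch-processing sketch: evaluate several traits in parallel worker processes
    # instead of sequentially. The "model fit" is a stub; real genomic-selection models
    # would run here.
    from multiprocessing import Pool
    import time

    TRAITS = ["milk_yield", "fertility", "stature", "somatic_cell_score"]

    def fit_trait_model(trait):
        time.sleep(0.5)                 # stand-in for an expensive model fit
        return trait, f"breeding values predicted for {trait}"

    if __name__ == "__main__":
        start = time.perf_counter()
        with Pool(processes=4) as pool:
            results = dict(pool.map(fit_trait_model, TRAITS))
        print(results)
        print(f"wall time: {time.perf_counter() - start:.2f} s (vs ~2 s sequentially)")
    ```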

  4. A spacecraft computer repairable via command.

    Science.gov (United States)

    Fimmel, R. O.; Baker, T. E.

    1971-01-01

    The MULTIPAC is a central data system developed for deep-space probes with the distinctive feature that it may be repaired during flight via command and telemetry links by reprogramming around the failed unit. The computer organization uses pools of identical modules which the program organizes into one or more computers called processors. The interaction of these modules is dynamically controlled by the program rather than hardware. In the event of a failure, new programs are entered which reorganize the central data system with a somewhat reduced total processing capability aboard the spacecraft. Emphasis is placed on the evolution of the system architecture and the final overall system design rather than the specific logic design.

  5. Seismic proving test of process computer systems with a seismic floor isolation system

    International Nuclear Information System (INIS)

    Fujimoto, S.; Niwa, H.; Kondo, H.

    1995-01-01

    The authors have carried out seismic proving tests for process computer systems as a Nuclear Power Engineering Corporation (NUPEC) project sponsored by the Ministry of International Trade and Industry (MITI). This paper presents the seismic test results for evaluating functional capabilities of process computer systems with a seismic floor isolation system. The seismic floor isolation system to isolate the horizontal motion was composed of a floor frame (13 m x 13 m), ball bearing units, and spring-damper units. A series of seismic excitation tests was carried out using a large-scale shaking table of NUPEC. From the test results, the functional capabilities during large earthquakes of computer systems with a seismic floor isolation system were verified

  6. Autonomous data acquisition system for Paks NPP process noise signals

    International Nuclear Information System (INIS)

    Lipcsei, S.; Kiss, S.; Czibok, T.; Dezso, Z.; Horvath, Cs.

    2005-01-01

    A prototype of a new concept noise diagnostics data acquisition system has been developed recently to renew the aged present system. This new system is capable of collecting the whole available noise signal set simultaneously. Signal plugging and data acquisition are performed by autonomous systems (installed at each reactor unit) that are controlled through the standard plant network from a central computer installed at a suitable location. Experts can use this central unit to process and archive data series downloaded from the reactor units. This central unit also provides selected noise diagnostics information for other departments. The paper describes the hardware and software architecture of the new system in detail, emphasising the potential benefits of the new approach. (author)

  7. A low-cost system for graphical process monitoring with colour video symbol display units

    International Nuclear Information System (INIS)

    Grauer, H.; Jarsch, V.; Mueller, W.

    1977-01-01

    A system for computer-controlled graphic process supervision, using colour symbol video displays, is described. It has the following characteristics: compact unit (no external memory for image storage); problem-oriented, simple descriptive coupling to the process program; no restriction on the graphical representation of process variables; computer and display independence, through the implementation of colours and parameterized code creation for the display. (WB) [de

  8. Optimized 4-bit Quantum Reversible Arithmetic Logic Unit

    Science.gov (United States)

    Ayyoub, Slimani; Achour, Benslama

    2017-08-01

    Reversible logic has received great attention in recent years due to its ability to reduce power dissipation. The main purposes of designing reversible logic are to decrease the quantum cost, the depth of the circuits and the number of garbage outputs. The arithmetic logic unit (ALU) is an important part of the central processing unit (CPU), serving as its execution unit. This paper presents a complete design of a new reversible arithmetic logic unit (ALU) that can be part of a programmable reversible computing device such as a quantum computer. The proposed ALU is based on a reversible low-power control unit and a full adder with small performance parameters, named the double Peres gate. The presented ALU can produce the largest number (28) of arithmetic and logic functions and has the lowest quantum cost and delay compared with existing designs.
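
    For orientation, the classical Peres gate maps (A, B, C) to (A, A XOR B, (A AND B) XOR C), and two Peres gates yield a reversible full adder, which is the "double Peres gate" building block the abstract refers to. The sketch below checks this construction against the ordinary full-adder truth table; it is a classical truth-table check, not a simulation of the quantum circuit proposed in the paper.

    ```python
    # The classical Peres gate maps (A, B, C) -> (A, A XOR B, (A AND B) XOR C).
    # Two Peres gates give a reversible full adder; this sketch verifies the
    # construction against the ordinary full-adder truth table.
    from itertools import product

    def peres(a, b, c):
        return a, a ^ b, (a & b) ^ c

    def full_adder(a, b, cin):
        # first gate: produces A, A^B and the partial carry A&B
        _, axb, aandb = peres(a, b, 0)
        # second gate: Sum = A^B^Cin and Cout = ((A^B)&Cin) ^ (A&B)
        _, s, cout = peres(axb, cin, aandb)
        return s, cout

    for a, b, cin in product((0, 1), repeat=3):
        s, cout = full_adder(a, b, cin)
        assert s == (a ^ b ^ cin) and cout == int(a + b + cin >= 2)
    print("two-Peres-gate full adder matches the classical full-adder truth table")
    ```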

  9. 77 FR 58576 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...

    Science.gov (United States)

    2012-09-21

    ... Devices, Portable Music and Data Processing Devices, Computers, and Components Thereof; Institution of... communication devices, portable music and data processing devices, computers, and components thereof by reason... alleges that an industry in the United States exists as required by subsection (a)(2) of section 337. The...

  10. Process control in conventional power plants. The use of computer systems

    Energy Technology Data Exchange (ETDEWEB)

    Schievink, A; Woehrle, G

    1989-03-01

    To process information, man can use his knowledge and his experience. Both these means, however, permit only slow flows of information (about 25 bit/s) to be processed. The flow of information that the staff of a modern 700-MW coal power station has to face is about 5000 bit per second, i.e. 200 times as much as a single human brain can process. One therefore needs modern computer-based process control systems which help the operating staff to recognize and handle these complicated and rapid processes efficiently. The man-computer interface is ergonomically improved by visual display units.

  11. A computer-aided software-tool for sustainable process synthesis-intensification

    DEFF Research Database (Denmark)

    Kumar Tula, Anjan; Babi, Deenesh K.; Bottlaender, Jack

    2017-01-01

    and determine within the design space, the more sustainable processes. In this paper, an integrated computer-aided software-tool that searches the design space for hybrid/intensified more sustainable process options is presented. Embedded within the software architecture are process synthesis...... operations as well as reported hybrid/intensified unit operations is large and can be difficult to manually navigate in order to determine the best process flowsheet for the production of a desired chemical product. Therefore, it is beneficial to utilize computer-aided methods and tools to enumerate, analyze...... constraints while also matching the design targets, they are therefore more sustainable than the base case. The application of the software-tool to the production of biodiesel is presented, highlighting the main features of the computer-aided, multi-stage, multi-scale methods that are able to determine more...

  12. Quality Improvement Process in a Large Intensive Care Unit: Structure and Outcomes.

    Science.gov (United States)

    Reddy, Anita J; Guzman, Jorge A

    2016-11-01

    Quality improvement in the health care setting is a complex process, and even more so in the critical care environment. The development of intensive care unit process measures and quality improvement strategies are associated with improved outcomes, but should be individualized to each medical center as structure and culture can differ from institution to institution. The purpose of this report is to describe the structure of quality improvement processes within a large medical intensive care unit while using examples of the study institution's successes and challenges in the areas of stat antibiotic administration, reduction in blood product waste, central line-associated bloodstream infections, and medication errors. © The Author(s) 2015.

  13. Space shuttle general purpose computers (GPCs) (current and future versions)

    Science.gov (United States)

    1988-01-01

    Current and future versions of general purpose computers (GPCs) for space shuttle orbiters are represented in this frame. The two boxes on the left (AP101B) represent the current GPC configuration, with the input-output processor at far left and the central processing unit (CPU) at its side. The upgraded version combines both elements in a single unit (far right, AP101S).

  14. New Generation General Purpose Computer (GPC) compact IBM unit

    Science.gov (United States)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  15. Hanford Central Waste Complex: Waste Receiving and Processing Facility dangerous waste permit application

    International Nuclear Information System (INIS)

    1991-10-01

    The Hanford Central Waste Complex is an existing and planned series of treatment, storage, and/or disposal (TSD) units that will centralize the management of solid waste operations at a single location on the Hanford Facility. The Complex includes two units: the WRAP Facility and the Radioactive Mixed Wastes Storage Facility (RMW Storage Facility). This Part B permit application addresses the WRAP Facility. The Facility will be a treatment and storage unit that will provide the capability to examine, sample, characterize, treat, repackage, store, and certify radioactive and/or mixed waste. Waste treated and stored will include both radioactive and/or mixed waste received from onsite and offsite sources. Certification will be designed to ensure and demonstrate compliance with waste acceptance criteria set forth by onsite disposal units and/or offsite facilities that subsequently are to receive waste from the WRAP Facility. This permit application discusses the following: facility description and general provisions; waste characterization; process information; groundwater monitoring; procedures to prevent hazards; contingency plan; personnel training; exposure information report; waste minimization plan; closure and postclosure requirements; reporting and recordkeeping; other relevant laws; certification

  16. Woody encroachment in the Central United States

    Science.gov (United States)

    Greg C. Liknes; Dacia M. Meneguzzo; Kevin. Nimerfro

    2015-01-01

    The landscape of the central United States is dominated by cropland and rangeland mixed with remnants of short- and tall-grass prairies that were once prevalent. Since the last ice age, these areas had sparse tree cover due to cyclical severe droughts, intentional fires used by indigenous people as a land management tool, and natural fires caused by lightning.

  17. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  18. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.

  19. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.
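
    A CPU reference sketch of the algorithm described above, full-search block matching with the summed-absolute-difference criterion, is shown below for an integer search grid. It illustrates the method only; the paper's CUDA, multi-GPU and non-integer-grid implementations are not reproduced.

    ```python
    # CPU reference sketch of full-search SAD block matching (integer search grid only).
    import numpy as np

    def full_search_sad(ref, cur, block=8, search=7):
        """Return per-block motion vectors (dy, dx) minimizing the SAD criterion."""
        H, W = cur.shape
        vectors = np.zeros((H // block, W // block, 2), dtype=int)
        for by in range(0, H - block + 1, block):
            for bx in range(0, W - block + 1, block):
                target = cur[by:by + block, bx:bx + block].astype(np.int32)
                best = (np.inf, 0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > H or x + block > W:
                            continue
                        cand = ref[y:y + block, x:x + block].astype(np.int32)
                        sad = np.abs(target - cand).sum()
                        if sad < best[0]:
                            best = (sad, dy, dx)
                vectors[by // block, bx // block] = best[1:]
        return vectors

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))       # known global motion
    print(full_search_sad(ref, cur)[3, 3])                # interior blocks report [-2  3]
    ```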

  20. Centralized multiprocessor control system for the frascati storage rings DAΦNE

    International Nuclear Information System (INIS)

    Di Pirro, G.; Milardi, C.; Serio, M.

    1992-01-01

    We describe the status of the DANTE (DAΦne New Tools Environment) control system for the new DAΦNE Φ-factory under construction at the Frascati National Laboratories. The system is based on a centralized communication architecture for simplicity and reliability. A central processor unit coordinates all communications between the consoles and the lower level distributed processing power, and continuously updates a central memory that contains the whole machine status. We have developed a system of VME Fiber Optic interfaces allowing very fast point to point communication between distant processors. Macintosh II personal computers are used as consoles. The lower levels are all built using the VME standard. (author)

  1. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized

  2. Trinary arithmetic and logic unit (TALU) using savart plate and spatial light modulator (SLM) suitable for optical computation in multivalued logic

    Science.gov (United States)

    Ghosh, Amal K.; Bhattacharya, Animesh; Raul, Moumita; Basuray, Amitabha

    2012-07-01

    The arithmetic logic unit (ALU) is the most important unit in any computing system. Optical computing is becoming more popular day by day because of its ultrahigh processing speed and huge data-handling capability. For fast processing, an optical TALU compatible with multivalued logic is therefore needed. In this regard, we report a trinary arithmetic and logic unit (TALU) in the modified trinary number (MTN) system, which is suitable for optical computation and other applications in multivalued logic systems. Here, Savart plate and spatial light modulator (SLM) based optoelectronic circuits have been used to exploit the optical tree architecture (OTA) in an optical interconnection network.
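
    For orientation, the sketch below shows arithmetic in balanced ternary (digits -1, 0, +1), a related multivalued number representation; the paper's modified trinary number (MTN) system and its Savart-plate/SLM optical realization differ in detail and are not reproduced here.

    ```python
    # Balanced-ternary arithmetic (digits -1, 0, +1) as a plain-software illustration of
    # multivalued arithmetic; the paper's MTN system and optical hardware are different.
    def to_balanced_ternary(n):
        """Integer -> list of balanced-ternary digits, least significant first."""
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:
                r = -1
                n += 1          # carry into the next digit
            digits.append(r)
            n //= 3
        return digits

    def from_balanced_ternary(digits):
        return sum(d * 3 ** i for i, d in enumerate(digits))

    a, b = 47, -19
    sa, sb = to_balanced_ternary(a), to_balanced_ternary(b)
    total = from_balanced_ternary(sa) + from_balanced_ternary(sb)
    print(sa, sb, "sum:", to_balanced_ternary(total), "=", total)
    ```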

  3. Information processing, computation, and cognition.

    Science.gov (United States)

    Piccinini, Gualtiero; Scarantino, Andrea

    2011-01-01

    Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both - although others disagree vehemently. Yet different cognitive scientists use 'computation' and 'information processing' to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism, connectionism, and computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates' empirical aspects.

  4. Computational content analysis of European Central Bank statements

    NARCIS (Netherlands)

    Milea, D.V.; Almeida, R.J.; Sharef, N.M.; Kaymak, U.; Frasincar, F.

    2012-01-01

    In this paper we present a framework for the computational content analysis of European Central Bank (ECB) statements. Based on this framework, we provide two approaches that can be used in a practical context. Both approaches use the content of ECB statements to predict upward and downward movement

  5. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
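
    The FLOPS metric discussed above can be illustrated by timing a dense matrix multiply and reporting the achieved rate, as in the sketch below. The matrix size is arbitrary, the result depends entirely on the machine and BLAS library, and, as the abstract argues, such a single number says little about the other bottlenecks of parallel CEM codes.

    ```python
    # Minimal illustration of the FLOPS metric: time a dense matrix multiply and report
    # the achieved GFLOPS. Results are machine- and BLAS-dependent.
    import time
    import numpy as np

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    flops = 2.0 * n ** 3                    # ~2*n^3 floating-point operations for a matmul
    print(f"{flops / elapsed / 1e9:.1f} GFLOPS achieved ({elapsed:.3f} s for n={n})")
    ```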

  6. Remote sensing-based characterization of rainfall during atmospheric rivers over the central United States

    Science.gov (United States)

    Nayak, Munir A.; Villarini, Gabriele

    2018-01-01

    Atmospheric rivers (ARs) play a central role in the hydrology and hydroclimatology of the central United States. More than 25% of the annual rainfall is associated with ARs over much of this region, with many large flood events tied to their occurrence. Despite the relevance of these storms for flood hydrology and water budget, the characteristics of rainfall associated with ARs over the central United States have not been investigated thus far. This study fills this major scientific gap by describing the rainfall during ARs over the central United States using five remote sensing-based precipitation products over a 12-year study period. The products we consider are: Stage IV, Tropical Rainfall Measuring Mission - Multi-satellite Precipitation Analysis (TMPA, both real-time and research version); Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN); the CPC MORPHing Technique (CMORPH). As part of the study, we evaluate these products against a rain gauge-based dataset using both graphical- and metrics-based diagnostics. Based on our analyses, Stage IV is found to better reproduce the reference data. Hence, we use it for the characterization of rainfall in ARs. Most of the AR-rainfall is located in a narrow region within ∼150 km on both sides of the AR major axis. In this region, rainfall has a pronounced positive relationship with the magnitude of the water vapor transport. Moreover, we have also identified a consistent increase in rainfall intensity with duration (or persistence) of AR conditions. However, there is not a strong indication of diurnal variability in AR rainfall. These results can be directly used in developing flood protection strategies during ARs. Further, weather prediction agencies can benefit from the results of this study to achieve higher skill of resolving precipitation processes in their models.

  7. To the problem of reliability standardization in computer-aided manufacturing at NPP units

    International Nuclear Information System (INIS)

    Yastrebenetskij, M.A.; Shvyryaev, Yu.V.; Spektor, L.I.; Nikonenko, I.V.

    1989-01-01

    The problems of reliability standardization in computer-aided manufacturing at NPP units are analyzed considering the following approaches: computer-aided manufacturing of NPP units as a part of an automated technological complex, and computer-aided manufacturing of NPP units as a multi-functional system. The selection of the set of reliability indices for computer-aided manufacturing of NPP units under each of these approaches is substantiated

  8. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted using a popular personal computer. The image-processing program was written with a C compiler. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on 8-inch flexible diskettes. Many fundamental image-processing operations were implemented, such as displaying the image on the monitor, calculating CT values and drawing profile curves. The results showed that a popular personal computer had the ability to process CT images. It also appeared that the 8-inch flexible diskette was still a useful medium for transferring image data. (author)

  9. Computation and brain processes, with special reference to neuroendocrine systems.

    Science.gov (United States)

    Toni, Roberto; Spaletta, Giulia; Casa, Claudia Della; Ravera, Simone; Sandri, Giorgio

    2007-01-01

    The development of neural networks and brain automata has made neuroscientists aware that the performance limits of these brain-like devices lie, at least in part, in their computational power. The computational basis of a standard cybernetic design, in fact, refers to that of a discrete and finite state machine or Turing Machine (TM). In contrast, it has been suggested that a number of human cerebral activities, from feedback controls up to mental processes, rely on a mixing of both finitary, digital-like and infinitary, continuous-like procedures. Therefore, the central nervous system (CNS) of man would exploit a form of computation going beyond that of a TM. This "non conventional" computation has been called hybrid computation. Some basic structures for hybrid brain computation are believed to be the brain computational maps, in which both Turing-like (digital) computation and continuous (analog) forms of calculus might occur. The cerebral cortex and brain stem appear to be primary candidates for this processing. However, neuroendocrine structures like the hypothalamus are also believed to exhibit hybrid computational processes, and might give rise to computational maps. Current theories on neural activity, including wiring and volume transmission, neuronal group selection and dynamic evolving models of brain automata, lend support to the existence of natural hybrid computation, stressing a cooperation between discrete and continuous forms of communication in the CNS. In addition, the recent advent of neuromorphic chips, like those to restore activity in damaged retina and visual cortex, suggests that the assumption of a discrete-continuum polarity in designing biocompatible neural circuitries is crucial for their ensuing performance. In these bionic structures, in fact, a correspondence exists between the original anatomical architecture and the synthetic wiring of the chip, resulting in a correspondence between natural and cybernetic neural activity. Thus, chip "form

  10. Utilizing General Purpose Graphics Processing Units to Improve Performance of Computer Modelling and Visualization

    Science.gov (United States)

    Monk, J.; Zhu, Y.; Koons, P. O.; Segee, B. E.

    2009-12-01

    With the introduction of the G8X series of cards by nVidia an architecture called CUDA was released; virtually all subsequent video cards have had CUDA support. With this new architecture nVidia provided extensions for C/C++ that create an Application Programming Interface (API) allowing code to be executed on the GPU. Since then the concept of GPGPU (general-purpose graphics processing unit) computing has been growing: the GPU is very good at algebra and at running things in parallel, so we should make use of that power for other applications. This is highly appealing in the area of geodynamic modeling, as multiple parallel solutions of the same differential equations at different points in space lead to a large speedup in simulation speed. Another benefit of CUDA is a programmatic method of transferring large amounts of data between the computer's main memory and the dedicated GPU memory located on the video card. In addition to being able to compute and render on the video card, the CUDA framework allows for a large speedup in situations, such as a tiled display wall, where the rendered pixels are to be displayed in a different location from where they are rendered. A CUDA extension for VirtualGL was developed allowing for faster read back at high resolutions. This paper examines several aspects of rendering OpenGL graphics on large displays using VirtualGL and VNC. It demonstrates how performance can be significantly improved in rendering on a tiled monitor wall. We present a CUDA enhanced version of VirtualGL as well as the advantages of having multiple VNC servers. It will discuss restrictions caused by read back and blitting rates and how they are affected by different sizes of virtual displays being rendered.
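
    The host-to-device transfer pattern described above for the CUDA C API can be sketched in Python using the CuPy library as a stand-in. This assumes a CUDA-capable GPU and the cupy package; the VirtualGL/VNC read-back work discussed in the abstract is not reproduced.

    ```python
    # Python stand-in (via CuPy, which requires a CUDA-capable GPU) for the host<->device
    # transfer pattern the text describes for the CUDA C API.
    import numpy as np
    import cupy as cp

    host_array = np.random.rand(1_000_000).astype(np.float32)

    device_array = cp.asarray(host_array)      # copy main memory -> GPU memory
    device_result = cp.sqrt(device_array) * 2  # computation runs on the GPU
    host_result = cp.asnumpy(device_result)    # copy GPU memory -> main memory

    print(host_result[:5])
    ```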

  11. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  12. A learnable parallel processing architecture towards unity of memory and computing

    Science.gov (United States)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  13. Computer systems for the control of teletherapy units

    International Nuclear Information System (INIS)

    Brace, J.A.

    1985-01-01

    This paper describes a computer-controlled tracking cobalt unit installed at the Royal Free Hospital. It is based on a standard TEM MS90 unit and operates at 90-cm source-axis distance with a geometric field size of 45 x 45 cm at that distance. It has been modified so that it can be used either manually or under computer control. There are nine parameters that can be controlled positionally and two that can be controlled in rate mode; these are presented in a table

  14. The Computational Processing of Intonational Prominence: A Functional Prosody Perspective

    OpenAIRE

    Nakatani, Christine Hisayo

    1997-01-01

    Intonational prominence, or accent, is a fundamental prosodic feature that is said to contribute to discourse meaning. This thesis outlines a new, computational theory of the discourse interpretation of prominence, from a FUNCTIONAL PROSODY perspective. Functional prosody makes the following two important assumptions: first, there is an aspect of prominence interpretation that centrally concerns discourse processes, namely the discourse focusing nature of prominence; and second, the role of p...

  15. Progress in a novel architecture for high performance processing

    Science.gov (United States)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    High performance processing (HPP) is an innovative architecture which targets high performance computing with excellent power efficiency and computing performance. It is suitable for data intensive applications like supercomputing, machine learning and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, which are the first generation of HPP cores, has been taped out successfully in the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed in the TSMC 16 nm FinFET Compact (FFC) technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).

  16. Plant computer system in nuclear power station

    International Nuclear Information System (INIS)

    Kato, Shinji; Fukuchi, Hiroshi

    1991-01-01

    In nuclear power stations, centrally concentrated monitoring has been adopted, and large quantities of information and operational equipment are concentrated in the central control rooms, which therefore become the important place of communication between plants and operators. More recently, owing to the increase in unit capacity, the strengthening of safety, the problems of the man-machine interface and so on, it has become important to concentrate information and to automate and simplify machinery and equipment in order to improve the operational environment, reliability and so on. Regarding the relation between nuclear power stations and computer systems, to which attention has recently been paid as a man-machine interface, the example of the Tsuruga Power Station of the Japan Atomic Power Co. is presented. The No. 2 plant at the Tsuruga Power Station is a PWR plant with 1160 MWe output and is a domestically built standardized plant; accordingly, the computer system adopted there is explained. The fundamental concept of the central control board, the process computer system, the design policy, the basic system configuration, reliability and maintenance, the CRT displays, and the computer system for the No. 1 BWR 357 MW plant are reported. (K.I.)

  17. Experience in programming Assembly language of CDC CYBER 170/750 computer

    International Nuclear Information System (INIS)

    Caldeira, A.D.

    1987-10-01

    Aiming to optimize the processing time of the BCG computer code on the CDC CYBER 170/750 computer, the INTERP subroutine was converted from FORTRAN-V to Assembly language. The BCG code was developed for solving the neutron transport equation by an iterative method, and the INTERP subroutine is the innermost loop of the code, carrying out five types of interpolation. The central processing unit Assembly language of the CDC CYBER 170/750 computer and its application in implementing the interpolation subroutine of the BCG code are described. (M.C.K.)

  18. Discrete-Event Execution Alternatives on General Purpose Graphical Processing Units

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.

    2006-01-01

    Graphics cards, traditionally designed as accelerators for computer graphics, have evolved to support more general-purpose computation. General Purpose Graphical Processing Units (GPGPUs) are now being used as highly efficient, cost-effective platforms for executing certain simulation applications. While most of these applications belong to the category of time-stepped simulations, little is known about the applicability of GPGPUs to discrete event simulation (DES). Here, we identify some of the issues and challenges that the GPGPU stream-based interface raises for DES, and present some possible approaches to moving DES to GPGPUs. Initial performance results on simulation of a diffusion process show that DES-style execution on GPGPU runs faster than DES on CPU and also significantly faster than time-stepped simulations on either CPU or GPGPU.

  19. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    Science.gov (United States)

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  20. Distributed GPU Computing in GIScience

    Science.gov (United States)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.

  1. Internode data communications in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
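    A minimal Python sketch of the buffering scheme described in the abstract, with hypothetical class and method names: buffers are pre-allocated per expected process at boot, a message that arrives early is parked in the messaging unit, and it is copied out to main memory once the process initializes.

```python
class MessagingUnit:
    """Toy model of per-process message buffers allocated at compute-node boot."""

    def __init__(self, expected_ranks):
        # One pre-allocated buffer slot per process expected on this node.
        self.buffers = {rank: [] for rank in expected_ranks}

    def receive(self, rank, message):
        # Store a message for a process that may not be initialized yet.
        self.buffers[rank].append(message)

    def drain(self, rank):
        # On process initialization: hand buffered messages over to the
        # process's own buffer in main memory and clear the unit's slot.
        delivered, self.buffers[rank] = self.buffers[rank], []
        return delivered


mu = MessagingUnit(expected_ranks=[0, 1])
mu.receive(1, b"early data")        # arrives before rank 1 is initialized
main_memory_buffer = mu.drain(1)    # rank 1 starts up and claims its messages
print(main_memory_buffer)           # [b'early data']
```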

  2. Application of analogue computers to radiotracer data processing

    International Nuclear Information System (INIS)

    Chmielewski, A.G.

    1979-01-01

    Some applications of analogue computers for processing the data of flow-system radiotracer investigations are presented. Analysis of the impulse response, shaped to obtain the frequency response of the system under consideration, can be performed on the basis of an estimated transfer function. Furthermore, simulation of the system behaviour for other excitation functions is discussed. A simple approach is presented for estimating the model parameters in situations where the input signal is not approximated by the unit impulse function. (author)

  3. Capacity-Building Programs Under the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR)

    Science.gov (United States)

    The United States signed the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR) in August 2004 with five Central American countries (Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua) and the Dominican Republic.

  4. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    Science.gov (United States)

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by Computer Assisted Teaching Unit at Queen Mary College with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and need for hard copy. Procedures and problems relating to academic involvement are…

  5. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  6. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical replacement interval when the computing demand and the cost and performance of the computers are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented by only a central processing unit (CPU) and all the computing demand is to be processed on the present computer until the next replacement. In model 2, on the other hand, excess demand is permitted and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O) and it must process all the demand. Model 4 is the same as model 3, but excess demand may be processed at another center. (1) Computing demand at JAERI, (2) the conformity of Grosch's law for recent computers, and (3) the replacement cost of computer systems, etc., are also described. (author)
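    A toy version of the trade-off the model formalizes, under purely hypothetical costs roughly in the spirit of model 2: demand grows each year, capacity is fixed per purchased machine, and excess demand is processed elsewhere at a price; the candidate interval with the lowest average annual cost is selected. None of the numbers come from the report.

```python
def average_annual_cost(interval_years, demand0=100.0, growth=1.2,
                        purchase_cost=500.0, capacity=400.0, outsource_price=2.0):
    """Average yearly cost if the computer is replaced every `interval_years` years.

    Hypothetical assumptions: one purchase per replacement cycle, geometric growth
    of demand, and demand above `capacity` processed at another center per unit price.
    """
    total = purchase_cost
    demand = demand0
    for _ in range(interval_years):
        total += max(0.0, demand - capacity) * outsource_price
        demand *= growth
    return total / interval_years

best = min(range(1, 11), key=average_annual_cost)
print(best, round(average_annual_cost(best), 1))
```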

  7. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  8. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  9. Review of computational fluid dynamics applications in biotechnology processes.

    Science.gov (United States)

    Sharma, C; Malhotra, D; Rathore, A S

    2011-01-01

    Computational fluid dynamics (CFD) is well established as a tool of choice for solving problems that involve one or more of the following phenomena: flow of fluids, heat transfer, mass transfer, and chemical reaction. Unit operations that are commonly utilized in biotechnology processes are often complex and as such would greatly benefit from application of CFD. The thirst for deeper process and product understanding that has arisen out of initiatives such as quality by design provides further impetus toward the usefulness of CFD for problems that may otherwise require extensive experimentation. Not surprisingly, there has been increasing interest in applying CFD toward a variety of applications in biotechnology processing in the last decade. In this article, we review applications in the major unit operations involved with processing of biotechnology products. These include fermentation, centrifugation, chromatography, ultrafiltration, microfiltration, and freeze drying. We feel that the future applications of CFD in biotechnology processing will focus on establishing CFD as a tool of choice for providing process understanding that can then be used to guide more efficient and effective experimentation. This article puts special emphasis on the work done in the last 10 years. © 2011 American Institute of Chemical Engineers

  10. The AMchip04 and the Processing Unit Prototype for the FastTracker

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L; Volpi, G

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  11. Development of a computer system at La Hague center

    International Nuclear Information System (INIS)

    Mimaud, Robert; Malet, Georges; Ollivier, Francis; Fabre, J.-C.; Valois, Philippe; Desgranges, Patrick; Anfossi, Gilbert; Gentizon, Michel; Serpollet, Roger.

    1977-01-01

    The U.P.2 plant, built at the La Hague Center, is intended mainly for the reprocessing of spent fuels coming from (as metal) graphite-gas reactors and (as oxide) light-water, heavy-water and breeder reactors. In each of the five large nuclear units the digital processing of measurements was handled until 1974 by CAE 3030 data processors. During the period 1974-1975 a modern industrial computer system was set up. This system, equipped with T 2000/20 hardware from the Telemecanique company, consists of five measurement acquisition devices (for a total of 1500 lines processed) and two central processing units (CPUs). The connection of these two CPUs (hardware and software) enables the system to be switched automatically to either the first CPU or the second one. The system covers, at present, data processing, threshold monitoring, alarm systems, display devices, periodical listing, and specific calculations concerning the process (balances, etc.), and at a later stage, automatic control of certain units of the process [fr

  12. Application of GPU to computational multiphase fluid dynamics

    International Nuclear Information System (INIS)

    Nagatake, T; Kunugi, T

    2010-01-01

    The MARS (Multi-interfaces Advection and Reconstruction Solver) [1] is one of the surface volume tracking methods for multi-phase flows. Nowadays, the performance of the GPU (Graphics Processing Unit) is much higher than that of the CPU (Central Processing Unit). In this study, the GPU was applied to the MARS in order to accelerate the computation of multi-phase flows (GPU-MARS), and the performance of the GPU-MARS was discussed. From the performance of the interface tracking method on a one-directional advection problem, it is found that the computation on the GPU (a single GTX280) was around 4 times faster than that on the CPU (Xeon 5040, 4 threads parallelized). From the performance of the Poisson solver using the algorithm developed in this study, it is found that the GPU was around 30 times faster than the CPU. Finally, it is confirmed that the GPU provides a large acceleration of the fluid flow computation (GPU-MARS) compared to the CPU. However, it is also found that double-precision computation must be performed on the GPU when very high precision is required.
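    The Poisson solve is the part reported to gain the most on the GPU; its data-parallel character is already visible in a plain numpy Jacobi sweep, where every interior grid point is updated independently from old neighbour values. This is an illustrative kernel only, not the solver used in GPU-MARS.

```python
import numpy as np

def jacobi_poisson(rhs, iterations=200, h=1.0):
    """Jacobi iterations for the 2-D Poisson equation with zero boundary values.

    Every interior point is updated from its four neighbours independently,
    which is exactly the kind of loop that maps well onto GPU threads.
    """
    u = np.zeros_like(rhs)
    for _ in range(iterations):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] - h * h * rhs[1:-1, 1:-1])
    return u

rhs = np.zeros((64, 64))
rhs[32, 32] = 1.0            # point source
u = jacobi_poisson(rhs)
print(u[30:35, 30:35].round(4))
```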

  13. Computer system for the beam line data processing at JT-60 prototype neutral beam injector

    International Nuclear Information System (INIS)

    Horiike, Hiroshi; Kawai, Mikito; Ohara, Yoshihiro

    1987-08-01

    The present report describes the hardware and software of the data acquisition computer system for the prototype neutral beam injector unit for JT-60. In order to operate the unit, several hundred signals from the beam line components have to be measured. These are mainly differential thermometers for the coolant water and thermocouples for the beam dump components, but do not include those for the cryo system. Since the unit operates in a series of pulses, the measurement should be conducted very quickly in order to ensure the simultaneity of the large number of measured data. The present system achieves fast data acquisition using a small computer with 128 kB of memory and measuring instruments connected through a bus. The system is connected to the JAERI computer center, since the data volume is too large to be completely processed by the small computer. The measured data can therefore be transferred to the computer center for calculation there, and the results can be received back. After the system was completed, the computer quickly printed out the power flow data, which had previously required much manual calculation. This system was very useful: it enhanced the experiments at the unit and reduced the labor. It enabled us to demonstrate the rated operation of the unit early and to accurately estimate operational data of the JT-60 NBI such as the injection power. (author)

  14. Implicit Theories of Creativity in Computer Science in the United States and China

    Science.gov (United States)

    Tang, Chaoying; Baer, John; Kaufman, James C.

    2015-01-01

    To study implicit concepts of creativity in computer science in the United States and mainland China, we first asked 308 Chinese computer scientists for adjectives that would describe a creative computer scientist. Computer scientists and non-computer scientists from China (N = 1069) and the United States (N = 971) then rated how well those…

  15. Power plant process computer

    International Nuclear Information System (INIS)

    Koch, R.

    1982-01-01

    The concept of instrumentation and control in nuclear power plants incorporates the use of process computers for tasks which are on-line with respect to real-time requirements but not closed-loop with respect to control. The general scope of tasks is: alarm annunciation on CRTs, data logging, data recording for post-trip reviews and plant behaviour analysis, nuclear data computation, and graphic displays. Process computers are additionally used for dedicated tasks such as the aeroball measuring system and the turbine stress evaluator. Further applications are personal dose supervision and access monitoring. (orig.)

  16. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface (MPI) and OpenMP, are taken into account. The properties of these programming methods are experimentally verified in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementation phase was compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. The optimal combination of computing methods in the signal processing domain and an implementation of new, fast routines are proposed as well.
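    As a generic illustration of the multicore-CPU side of such comparisons (not the benchmark code from the paper), recent SciPy releases let a batched FFT be split over several worker threads via the workers argument:

```python
import numpy as np
from scipy import fft
from time import perf_counter

# A batch of independent 1-D signals; each row is transformed separately.
x = np.random.default_rng(0).standard_normal((2048, 4096))

for workers in (1, 4):
    t0 = perf_counter()
    X = fft.fft(x, axis=-1, workers=workers)   # multithreaded batched FFT
    print(f"{workers} worker(s): {perf_counter() - t0:.3f} s")
```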

  17. Stress drops of induced and tectonic earthquakes in the central United States are indistinguishable.

    Science.gov (United States)

    Huang, Yihe; Ellsworth, William L; Beroza, Gregory C

    2017-08-01

    Induced earthquakes currently pose a significant hazard in the central United States, but there is considerable uncertainty about the severity of their ground motions. We measure stress drops of 39 moderate-magnitude induced and tectonic earthquakes in the central United States and eastern North America. Induced earthquakes, more than half of which are shallower than 5 km, show a comparable median stress drop to tectonic earthquakes in the central United States that are dominantly strike-slip but a lower median stress drop than that of tectonic earthquakes in the eastern North America that are dominantly reverse-faulting. This suggests that ground motion prediction equations developed for tectonic earthquakes can be applied to induced earthquakes if the effects of depth and faulting style are properly considered. Our observation leads to the notion that, similar to tectonic earthquakes, induced earthquakes are driven by tectonic stresses.

  18. Noncontrast computed tomographic Hounsfield unit evaluation of cerebral venous thrombosis: a quantitative evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Besachio, David A. [University of Utah, Department of Radiology, Salt Lake City (United States); United States Navy, Bethesda, MD (United States); Quigley, Edward P.; Shah, Lubdha M.; Salzman, Karen L. [University of Utah, Department of Radiology, Salt Lake City (United States)

    2013-08-15

    Our objective is to determine the utility of noncontrast Hounsfield unit values, Hounsfield unit values corrected for the patient's hematocrit, and venoarterial Hounsfield unit difference measurements in the identification of intracranial venous thrombosis on noncontrast head computed tomography. We retrospectively reviewed noncontrast head computed tomography exams performed in both normal patients and those with cerebral venous thrombosis, acquiring Hounsfield unit values in normal and thrombosed cerebral venous structures. Also, we acquired Hounsfield unit values in the internal carotid artery for comparison to thrombosed and nonthrombosed venous structures and compared the venous Hounsfield unit values to the patient's hematocrit. A significant difference is identified between Hounsfield unit values in thrombosed and nonthrombosed venous structures. Applying Hounsfield unit threshold values of greater than 65, a Hounsfield unit to hematocrit ratio of greater than 1.7, and venoarterial difference values greater than 15 alone and in combination, the majority of cases of venous thrombosis are identifiable on noncontrast head computed tomography. Absolute Hounsfield unit values, Hounsfield unit to hematocrit ratios, and venoarterial Hounsfield unit value differences are a useful adjunct in noncontrast head computed tomographic evaluation of cerebral venous thrombosis. (orig.)
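    The three thresholds reported above translate directly into a simple decision rule; the sketch below applies them exactly as stated (HU > 65, HU-to-hematocrit ratio > 1.7, venoarterial difference > 15) to hypothetical measurements and is illustrative only, not a clinical tool.

```python
def suggests_venous_thrombosis(venous_hu, arterial_hu, hematocrit_percent):
    """Apply the noncontrast CT criteria quoted in the abstract.

    Returns the individual criteria plus a combined flag (any criterion met).
    """
    criteria = {
        "absolute_hu": venous_hu > 65,
        "hu_to_hematocrit": venous_hu / hematocrit_percent > 1.7,
        "venoarterial_difference": (venous_hu - arterial_hu) > 15,
    }
    criteria["any"] = any(criteria.values())
    return criteria

# Hypothetical measurements: dense venous sinus, normal artery, hematocrit 40%.
print(suggests_venous_thrombosis(venous_hu=78, arterial_hu=45, hematocrit_percent=40))
```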

  19. Noncontrast computed tomographic Hounsfield unit evaluation of cerebral venous thrombosis: a quantitative evaluation

    International Nuclear Information System (INIS)

    Besachio, David A.; Quigley, Edward P.; Shah, Lubdha M.; Salzman, Karen L.

    2013-01-01

    Our objective is to determine the utility of noncontrast Hounsfield unit values, Hounsfield unit values corrected for the patient's hematocrit, and venoarterial Hounsfield unit difference measurements in the identification of intracranial venous thrombosis on noncontrast head computed tomography. We retrospectively reviewed noncontrast head computed tomography exams performed in both normal patients and those with cerebral venous thrombosis, acquiring Hounsfield unit values in normal and thrombosed cerebral venous structures. Also, we acquired Hounsfield unit values in the internal carotid artery for comparison to thrombosed and nonthrombosed venous structures and compared the venous Hounsfield unit values to the patient's hematocrit. A significant difference is identified between Hounsfield unit values in thrombosed and nonthrombosed venous structures. Applying Hounsfield unit threshold values of greater than 65, a Hounsfield unit to hematocrit ratio of greater than 1.7, and venoarterial difference values greater than 15 alone and in combination, the majority of cases of venous thrombosis are identifiable on noncontrast head computed tomography. Absolute Hounsfield unit values, Hounsfield unit to hematocrit ratios, and venoarterial Hounsfield unit value differences are a useful adjunct in noncontrast head computed tomographic evaluation of cerebral venous thrombosis. (orig.)

  20. Seismic evaluation of buildings in the Eastern and Central United States

    International Nuclear Information System (INIS)

    Malley, J.O.; Poland, C.D.

    1991-01-01

    The vast majority of the existing buildings in the Central and Eastern United States have not been designed to resist seismic forces, even though it is becoming widely accepted that there is a potential for damaging earthquakes in these regions of the country. These buildings, therefore, may constitute a serious threat to life safety in the event of a major earthquake. The ATC-14 procedure for the seismic evaluation of existing buildings has begun to gain wide acceptance since its publication in 1987. The National Center for Earthquake Engineering Research (NCEER) funded a project to critically assess the applicability of ATC-14 to buildings in the Eastern and Central United States. This NCEER project developed a large volume of recommended modifications to ATC-14 which are intended to improve the procedure's recommendations for the seismic evaluation of buildings in regions of low seismicity. NCEER is sponsoring a second project which will produce a separate document for the seismic evaluation of existing buildings that specifically focuses on structures in these areas of the country. This report, which should be completed in 1991, will provide a valuable tool for practicing engineers performing these evaluations in the Eastern and Central United States. This paper presents the results of these NCEER projects and introduces the revised ATC-14 methodology

  1. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    International Nuclear Information System (INIS)

    Varela Rodriguez, F

    2011-01-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.

  2. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    Science.gov (United States)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.

  3. Design, Assembly, Integration, and Testing of a Power Processing Unit for a Cylindrical Hall Thruster, the NORSAT-2 Flatsat, and the Vector Gravimeter for Asteroids Instrument Computer

    Science.gov (United States)

    Svatos, Adam Ladislav

    This thesis describes the author's contributions to three separate projects. The bus of the NORSAT-2 satellite was developed by the Space Flight Laboratory (SFL) for the Norwegian Space Centre (NSC) and Space Norway. The author's contributions to the mission were performing unit tests for the components of all the spacecraft subsystems as well as designing and assembling the flatsat from flight spares. Gedex's Vector Gravimeter for Asteroids (VEGA) is an accelerometer for spacecraft. The author's contributions to this payload were modifying the instrument computer board schematic, designing the printed circuit board, developing and applying test software, and performing thermal acceptance testing of two instrument computer boards. The SFL's cylindrical Hall effect thruster uses a cylindrical configuration and permanent magnets to achieve miniaturization and low power consumption, respectively. The author's contributions were to design, build, and test an engineering model power processing unit.

  4. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to
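    A compact numpy sketch of the direct (all-pairs) force evaluation that such GPU codes accelerate: each particle's acceleration is the softened sum of contributions from all other particles. This illustrates the O(N^2) algorithm only, not the paper's CUDA kernel.

```python
import numpy as np

def direct_accelerations(pos, mass, softening=1e-2, G=1.0):
    """O(N^2) gravitational accelerations with Plummer softening."""
    d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]        # pairwise separations
    r2 = (d ** 2).sum(axis=-1) + softening ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                            # no self-interaction
    return G * (d * (mass[np.newaxis, :] * inv_r3)[..., np.newaxis]).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.standard_normal((256, 3))
mass = np.full(256, 1.0 / 256)
acc = direct_accelerations(pos, mass)
print(acc.shape, np.abs(acc).max().round(3))
```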

  5. GPU-Based FFT Computation for Multi-Gigabit WirelessHD Baseband Processing

    Directory of Open Access Journals (Sweden)

    Nicholas Hinitt

    2010-01-01

    The next generation of Graphics Processing Units (GPUs) is being considered for non-graphics applications. Millimeter wave (60 GHz) wireless networks that are capable of multi-gigabit per second (Gbps) transfer rates require a significant baseband throughput. In this work, we consider the baseband of WirelessHD, a 60 GHz communications system, which can provide a data rate of up to 3.8 Gbps over a short range wireless link. Thus, we explore the feasibility of achieving gigabit baseband throughput using GPUs. One of the most computationally intensive functions commonly used in baseband communications, the Fast Fourier Transform (FFT) algorithm, is implemented on an NVIDIA GPU using their general-purpose computing platform called the Compute Unified Device Architecture (CUDA). The paper first investigates the implementation of an FFT algorithm using the GPU hardware and exploiting the computational capability available. It then outlines the limitations discovered and the methods used to overcome these challenges. Finally, a new algorithm to compute the FFT is proposed, which reduces interprocessor communication. It is further optimized by improving memory access, enabling the processing rate to exceed 4 Gbps and achieving a processing time of less than 200 ns for a 512-point FFT using a two-GPU solution.
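    The workload is a large batch of independent 512-point transforms, which is what makes it map well onto a GPU; the numpy sketch below shows that batched structure and the kind of CPU baseline a CUDA implementation would be compared against (sizes and data are illustrative).

```python
import numpy as np

# One batch of WirelessHD-style OFDM symbols: many independent 512-point FFTs.
BATCH, N = 10_000, 512
rng = np.random.default_rng(0)
symbols = rng.standard_normal((BATCH, N)) + 1j * rng.standard_normal((BATCH, N))

spectra = np.fft.fft(symbols, axis=-1)   # each row is transformed independently
print(spectra.shape)                     # (10000, 512)
```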

  6. Computer-assisted training in the thermal production department

    International Nuclear Information System (INIS)

    Felgines, R.

    1985-01-01

    For many years now, in the United States and Canada, computer-assisted training (CAT) experiments have been carried out in various fields: general or professional education, and student testing in universities. This method seems very promising, particularly for continuing education and for keeping industrial process operating and maintenance personnel abreast of their specialities. Thanks to progress in data processing and remote processing with central computers, this technique is being developed in France for professional training applications. Faced with many training problems, the Thermal Production Department of EDF (Electricite de France) first conducted a test in 1979 involving a limited subset of the nuclear power station operating personnel; this course amounted to some ten hours with very limited content. It seemed promising enough that in 1981 a real test was launched at 4 PWR plants: DAMPIERRE, FESSENHEIM, GRAVELINES, TRICASTIN. This test, which involved about 700 employees, was fruitful, and it was decided to generalise the system to all the French nuclear power plants (40 units of 900 and 1300 MW). (author)

  7. Computation of Galois field expressions for quaternary logic functions on GPUs

    Directory of Open Access Journals (Sweden)

    Gajić Dušan B.

    2014-01-01

    Galois field (GF) expressions are polynomials used as representations of multiple-valued logic (MVL) functions. For this purpose, MVL functions are considered as functions defined over a finite (Galois) field of order p, GF(p). The problem of computing these functional expressions has an important role in areas such as digital signal processing and logic design. The time needed for computing GF-expressions increases exponentially with the number of variables in MVL functions and, as a result, it often represents a limiting factor in applications. This paper proposes a method for the accelerated computation of GF(4)-expressions for quaternary (four-valued) logic functions using graphics processing units (GPUs). The method is based on the spectral interpretation of GF-expressions, permitting the use of fast Fourier transform (FFT)-like algorithms for their computation. These algorithms are then adapted for highly parallel processing on GPUs. The performance of the proposed solutions is compared with reference C/C++ implementations of the same algorithms processed on central processing units (CPUs). Experimental results confirm that the presented approach leads to a significant reduction in processing times (up to 10.86 times when compared to CPU processing). Therefore, the proposed approach widens the set of problem instances which can be efficiently handled in practice. [Projects of the Ministry of Science of the Republic of Serbia, No. ON174026 and No. III44006]
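    A small CPU-side sketch of the Kronecker (FFT-like) structure such methods exploit, assuming the standard encoding of GF(4) as {0, 1, a, a+1} with addition as XOR and a^2 = a + 1. The 4x4 matrix V evaluates a one-variable polynomial at all four field points; applying it stage by stage along each variable handles several variables, and computing the expression coefficients uses the inverse of the same structure. This is an illustration of the underlying transform, not the CUDA code from the paper.

```python
import numpy as np

# GF(4) elements {0, 1, a, a+1} encoded as 0..3; addition is bitwise XOR,
# multiplication follows a^2 = a + 1.
MUL = np.array([[0, 0, 0, 0],
                [0, 1, 2, 3],
                [0, 2, 3, 1],
                [0, 3, 1, 2]])

# V[i, j] = (point i)^j over GF(4): row i evaluates 1, x, x^2, x^3 at point i.
V = np.array([[1, 0, 0, 0],
              [1, 1, 1, 1],
              [1, 2, 3, 1],
              [1, 3, 2, 1]])

def gf4_matvec(M, v):
    """Matrix-vector product over GF(4): XOR-accumulation of MUL look-ups."""
    return np.array([np.bitwise_xor.reduce(MUL[row, v]) for row in M])

# Evaluate a two-variable GF(4) polynomial at all 16 points by applying V
# along each variable in turn -- the Kronecker (FFT-like) factorization.
coeffs = np.arange(16).reshape(4, 4) % 4                              # c[j, k] multiplies x^j * y^k
stage1 = np.array([gf4_matvec(V, coeffs[:, k]) for k in range(4)]).T  # transform along x
values = np.array([gf4_matvec(V, stage1[i, :]) for i in range(4)])    # transform along y
print(values)   # values[i, m] = f(x = point i, y = point m)
```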

  8. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometry. However, because Monte Carlo simulates each neutron behavior one by one, it takes a very long computing time if enough neutrons are used for high precision of calculation. Accordingly, methods that reduce the computing time are required. A Monte Carlo code is well suited to parallel calculation, since it simulates the behavior of each neutron independently and thus parallel computation is natural. The parallelization of Monte Carlo codes, however, has conventionally been done using multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs can be made available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA(TM), a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to manage GPUs using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is investigated
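    Independent particle histories are what make Monte Carlo transport naturally parallel; in the numpy sketch below, vectorized sampling plays the role of many GPU threads, each following one neutron through a purely absorbing slab. The cross-section, geometry and estimator are hypothetical and unrelated to the code assessed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neutrons = 1_000_000      # one "thread" per independent history
sigma_t = 0.5               # hypothetical total macroscopic cross-section (1/cm)
slab_thickness = 4.0        # cm, purely absorbing slab

# Sample every neutron's free path at once (exponential distribution).
free_paths = rng.exponential(scale=1.0 / sigma_t, size=n_neutrons)
transmitted = np.count_nonzero(free_paths > slab_thickness)

print("MC transmission    :", transmitted / n_neutrons)
print("analytic exp(-Sx)  :", np.exp(-sigma_t * slab_thickness))
```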

  9. The ATLAS Fast Tracker Processing Units - input and output data preparation

    CERN Document Server

    Bolz, Arthur Eugen; The ATLAS collaboration

    2016-01-01

    The ATLAS Fast Tracker is a hardware processor built to reconstruct tracks at a rate of up to 100 kHz and provide them to the high level trigger system. The Fast Tracker will allow the trigger to utilize tracking information from the entire detector at an earlier event selection stage than ever before, allowing for more efficient event rejection. The connections of the system to the detector read-outs and to the high level trigger computing farms are made through custom boards implementing the Advanced Telecommunications Computing Architecture standard. The input is processed by the Input Mezzanines and Data Formatter boards, designed to receive and sort the data coming from the Pixel and Semi-conductor Tracker. The Fast Tracker to Level-2 Interface Card connects the system to the computing farm. The Input Mezzanines are 128 boards, performing clustering, placed on the 32 Data Formatter mother boards that sort the information into the 64 logical regions required by the downstream processing units. This necessitat...

  10. Characterization of the Temporal Clustering of Flood Events across the Central United States in terms of Climate States

    Science.gov (United States)

    Mallakpour, Iman; Villarini, Gabriele; Jones, Michael; Smith, James

    2016-04-01

    The central United States is a region of the country that has been plagued by frequent catastrophic flooding (e.g., flood events of 1993, 2008, 2013, and 2014), with large economic and social repercussions (e.g., fatalities, agricultural losses, flood losses, water quality issues). The goal of this study is to examine whether it is possible to describe the occurrence of flood events at the sub-seasonal scale in terms of variations in the climate system. Daily streamflow time series from 774 USGS stream gage stations over the central United States (defined here to include North Dakota, South Dakota, Nebraska, Kansas, Missouri, Iowa, Minnesota, Wisconsin, Illinois, West Virginia, Kentucky, Ohio, Indiana, and Michigan) with a record of at least 50 years and ending no earlier than 2011 are used for this study. We use a peak-over-threshold (POT) approach to identify flood peaks so that we have, on average two events per year. We model the occurrence/non-occurrence of a flood event over time using regression models based on Cox processes. Cox processes are widely used in biostatistics and can be viewed as a generalization of Poisson processes. Rather than assuming that flood events occur independently of the occurrence of previous events (as in Poisson processes), Cox processes allow us to account for the potential presence of temporal clustering, which manifests itself in an alternation of quiet and active periods. Here we model the occurrence/non-occurrence of flood events using two climate indices as climate time-varying covariates: the North Atlantic Oscillation (NAO) and the Pacific-North American pattern (PNA). The results of this study show that NAO and/or PNA can explain the temporal clustering in flood occurrences in over 90% of the stream gage stations we considered. Analyses of the sensitivity of the results to different average numbers of flood events per year (from one to five) are also performed and lead to the same conclusions. The findings of this work

  11. Data processing with PC-9801 micro-computer for HCN laser scattering experiments

    International Nuclear Information System (INIS)

    Iwasaki, T.; Okajima, S.; Kawahata, K.; Tetsuka, T.; Fujita, J.

    1986-09-01

    In order to process the data of HCN laser scattering experiments, microcomputer software has been developed and applied to the measurements of density fluctuations in the JIPP T-IIU tokamak plasma. The data processing system consists of a spectrum analyzer, an SM-2100A Signal Analyzer (IWATSU ELECTRIC CO., LTD.), a PC-9801m3 microcomputer, a CRT display and a dot printer. The output signals from the spectrum analyzer are A/D converted and stored on a mini floppy disk in the signal analyzer. The software to process the data is composed of system programs and several user programs. Real-time data processing is carried out for every plasma shot at 4-minute intervals by the microcomputer connected to the signal analyzer through a GP-IB interface. The time evolutions of the frequency spectrum of the density fluctuations are displayed on the CRT attached to the microcomputer and printed out on a printer sheet. In the case of data processing after the experiments, the data stored on the floppy disk of the signal analyzer are read out using a floppy disk unit attached to the microcomputer. After computation with the user programs, the results, such as the monitored signal, frequency spectra, wave number spectra and the time evolutions of the spectrum, are displayed and printed out. In this technical report, the system, the software and the directions for use are described. (author)

  12. An Application of Graphics Processing Units to Geosimulation of Collective Crowd Behaviour

    Directory of Open Access Journals (Sweden)

    Cjoskāns Jānis

    2017-12-01

    The goal of the paper is to assess ways of improving the computational performance and efficiency of collective crowd behaviour simulation by using parallel computing methods implemented on a graphics processing unit (GPU). To perform an experimental evaluation of the benefits of parallel computing, a new GPU-based simulator prototype is proposed and its runtime performance is analysed. Based on practical examples of pedestrian dynamics geosimulation, the obtained performance measurements are compared to several other available multiagent simulation tools to determine the efficiency of the proposed simulator, as well as to provide generic guidelines for efficiency improvements of the parallel simulation of collective crowd behaviour.

  13. Solution of relativistic quantum optics problems using clusters of graphical processing units

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, D.F., E-mail: daviel.gordon@nrl.navy.mil; Hafizi, B.; Helle, M.H.

    2014-06-15

    Numerical solution of relativistic quantum optics problems requires high performance computing due to the rapid oscillations in a relativistic wavefunction. Clusters of graphical processing units are used to accelerate the computation of a time dependent relativistic wavefunction in an arbitrary external potential. The stationary states in a Coulomb potential and uniform magnetic field are determined analytically and numerically, so that they can be used as initial conditions in fully time dependent calculations. Relativistic energy levels in extreme magnetic fields are recovered as a means of validation. The relativistic ionization rate is computed for an ion illuminated by a laser field near the usual barrier suppression threshold, and the ionizing wavefunction is displayed.

  14. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1992-01-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed
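    A toy sketch of how the listed processes combine into a source term: containers fail at different times, after which each container's remaining (decayed) inventory leaches at a constant fractional rate. The single-rate leach model and all parameter values are hypothetical simplifications; DUST itself is more general.

```python
import numpy as np

years = np.arange(0, 200)
failure_times = np.array([10, 25, 25, 60])   # hypothetical container failure years
inventory = np.array([1.0, 2.0, 2.0, 0.5])   # hypothetical activity per container (Ci)
leach_rate = 0.02                            # fraction of remaining inventory released per year
half_life = 30.0                             # hypothetical radionuclide half-life (years)
decay = np.log(2) / half_life

# Release rate from each container: zero before failure, then first-order leaching
# of the decayed inventory remaining at each year.
t_since_fail = np.clip(years[:, None] - failure_times[None, :], 0, None)
failed = years[:, None] >= failure_times[None, :]
remaining = inventory * np.exp(-decay * years[:, None]) * np.exp(-leach_rate * t_since_fail)
release_rate = (leach_rate * remaining * failed).sum(axis=1)   # Ci/year, all containers

print(release_rate[[9, 10, 30, 100]].round(4))
```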

  15. A formalized design process for bacterial consortia that perform logic computing.

    Directory of Open Access Journals (Sweden)

    Weiyue Ji

    The concept of microbial consortia is highly attractive in synthetic biology. Despite all its benefits, however, problems remain for large-scale multicellular gene circuits, for example, how to reliably design and distribute the circuits in microbial consortia with a limited number of well-behaved genetic modules and quorum-sensing wiring molecules. To manage this problem, here we propose a formalized design process: (i) determine the basic logic units (AND, OR and NOT gates) based on mathematical and biological considerations; (ii) establish rules to search for and distribute the simplest logic design; (iii) assemble the assigned basic logic units in each logic-operating cell; and (iv) fine-tune the circuiting interface between logic operators. We analyzed in silico gene circuits with inputs ranging from two to four, comparing our method with pre-existing ones. Results showed that this formalized design process is more feasible in terms of the number of cells required. Furthermore, as a proof of principle, an Escherichia coli consortium that performs the XOR function, a typical complex computing operation, was designed. The construction and characterization of logic operators is independent of "wiring" and provides predictive information for fine-tuning. This formalized design process provides guidance for the design of microbial consortia that perform distributed biological computation.
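    As a plain-software analogue of the design process, the XOR example decomposes into the three basic units, each assigned to its own "cell" that exchanges only single signals with the others; the cell names and signal wiring below are hypothetical.

```python
# XOR(a, b) built only from the basic logic units AND, OR and NOT,
# with each unit assigned to its own "cell"; cells exchange single signals,
# standing in for quorum-sensing molecules.

def cell_or(a, b):   return a or b          # cell 1: OR unit
def cell_and(a, b):  return a and b         # cell 2: AND unit
def cell_not(x):     return not x           # cell 3: NOT unit

def consortium_xor(a, b):
    s1 = cell_or(a, b)                       # signal produced by cell 1
    s2 = cell_not(cell_and(a, b))            # signal produced by cells 2 and 3
    return cell_and(s1, s2)                  # a second AND-operating cell combines them

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(consortium_xor(a, b)))
```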

  16. Guide to Computational Geometry Processing

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Gravesen, Jens; Anton, François

    be processed before it is useful. This Guide to Computational Geometry Processing reviews the algorithms for processing geometric data, with a practical focus on important techniques not covered by traditional courses on computer vision and computer graphics. This is balanced with an introduction to the theoretical and mathematical underpinnings of each technique, enabling the reader to not only implement a given method, but also to understand the ideas behind it, its limitations and its advantages. Topics and features: presents an overview of the underlying mathematical theory, covering vector spaces, metric spaces, affine spaces, differential geometry, and finite difference methods for derivatives and differential equations; reviews geometry representations, including polygonal meshes, splines, and subdivision surfaces; examines techniques for computing curvature from polygonal meshes; describes

  17. Procedures for central auditory processing screening in schoolchildren.

    Science.gov (United States)

    Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella

    2018-03-22

    Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria were original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English; exclusion criteria were studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and that are normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative for the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed, that

  18. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  19. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed; Anciaux-Sedrakian, Ani; Rozanska, Xavier; Klahr, Diego; Guignon, Thomas; Fleurat-Lessard, Paul

    2012-01-01

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  20. Design, implementation and evaluation of a central unit for controlling climatic conditions in the greenhouse

    OpenAIRE

    Gh. Zarei; A. Azizi

    2016-01-01

    In greenhouse culture, in addition to increasing the quantity and quality of crop production in comparison with traditional methods, agricultural inputs are saved as well. Recently, using new methods, designs and materials, and higher automation in greenhouses, better management has become possible for enhancing yield and improving the quality of greenhouse crops. The constructed and evaluated central controller unit (CCU) is a central controller system and computerized monitoring unit for g

  1. PREMATH: a Precious-Material Holdup Estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.; Bruns, D.D.

    1982-01-01

    A computer program, PREMATH (Precious Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. PREMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels - including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, PREMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated, measured material balances for thorium (a less valuable material than uranium) during steady-state process operation
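    The core idea, a steady-state holdup estimate from process measurements via a simple material balance, fits in a few lines; the quantities below are hypothetical and are not data from the plant described.

```python
# Steady-state material-balance holdup estimate for one process vessel:
# holdup = previously acknowledged material + measured inputs - measured outputs.
# All quantities are hypothetical illustrations (kg of uranium).

acknowledged_holdup = 1.20     # material already attributed to the vessel
inputs = [0.85, 0.40]          # measured transfers into the vessel since then
outputs = [0.95]               # measured transfers out of the vessel

estimated_holdup = acknowledged_holdup + sum(inputs) - sum(outputs)
print(f"Estimated vessel holdup: {estimated_holdup:.2f} kg U")   # 1.50 kg U
```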

  2. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1981-01-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. NUMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels-including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, NUMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated, measured material balance for thorium during steady-state process operation

  3. An Examination of the Relationship between Acculturation Level and PTSD among Central American Immigrants in the United States

    Science.gov (United States)

    Sankey, Sarita Marie

    2010-01-01

    The purpose of this study was to examine the relationship between acculturation level and posttraumatic stress disorder (PTSD) prevalence in Central American immigrants in the United States. Central American immigrants represent a population that is a part of the Latino/Hispanic Diaspora in the United States. By the year 2050 the United States…

  4. On the use of Cox regression to examine the temporal clustering of flooding and heavy precipitation across the central United States

    Science.gov (United States)

    Mallakpour, Iman; Villarini, Gabriele; Jones, Michael P.; Smith, James A.

    2017-08-01

    The central United States is plagued by frequent catastrophic flooding, such as the flood events of 1993, 2008, 2011, 2013, 2014 and 2016. The goal of this study is to examine whether it is possible to describe the occurrence of flood and heavy precipitation events at the sub-seasonal scale in terms of variations in the climate system. Daily streamflow and precipitation time series over the central United States (defined here to include North Dakota, South Dakota, Nebraska, Kansas, Missouri, Iowa, Minnesota, Wisconsin, Illinois, West Virginia, Kentucky, Ohio, Indiana, and Michigan) are used in this study. We model the occurrence/non-occurrence of a flood and heavy precipitation event over time using regression models based on Cox processes, which can be viewed as a generalization of Poisson processes. Rather than assuming that an event (i.e., flooding or precipitation) occurs independently of the occurrence of the previous one (as in Poisson processes), Cox processes allow us to account for the potential presence of temporal clustering, which manifests itself in an alternation of quiet and active periods. Here we model the occurrence/non-occurrence of flood and heavy precipitation events using two climate indices as time-varying covariates: the Arctic Oscillation (AO) and the Pacific-North American pattern (PNA). We find that AO and/or PNA are important predictors in explaining the temporal clustering in flood occurrences in over 78% of the stream gages we considered. Similar results are obtained when working with heavy precipitation events. Analyses of the sensitivity of the results to different thresholds used to identify events lead to the same conclusions. The findings of this work highlight that variations in the climate system play a critical role in explaining the occurrence of flood and heavy precipitation events at the sub-seasonal scale over the central United States.
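    A discretized stand-in for the regression described above: monthly event counts modelled as Poisson with climate indices as covariates, fitted with statsmodels on synthetic data. A true Cox-process fit would work on the continuous-time point pattern, so this is a simplification for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_months = 600

# Synthetic climate covariates standing in for monthly AO and PNA indices.
ao = rng.standard_normal(n_months)
pna = rng.standard_normal(n_months)

# Synthetic flood counts whose rate depends on the covariates.
true_rate = np.exp(-1.0 + 0.5 * ao - 0.3 * pna)
counts = rng.poisson(true_rate)

X = sm.add_constant(np.column_stack([ao, pna]))
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.params)    # should be close to (-1.0, 0.5, -0.3)
```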

  5. Computer aided process control equipment at the Karlsruhe reprocessing pilot plant, WAK

    International Nuclear Information System (INIS)

    Winter, R.; Finsterwalder, L.; Gutzeit, G.; Reif, J.; Stollenwerk, A.H.; Weinbrecht, E.; Weishaupt, M.

    1991-01-01

    A computer aided process control system has been installed at the Karlsruhe Spent Fuel Reprocessing Plant, WAK. All necessary process control data of the first extraction cycle are collected via a data collection system and displayed in suitable ways on a screen for the operator in charge of the unit. To aid verification of the displayed data, various measurements are associated with each other using balance-type process modeling. Thus, deviations from flowsheet conditions and malfunctions of the measuring equipment are easily detected. (orig.) [de
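    The record does not give the balance models themselves, so the following is only a hedged sketch of the type of cross-check it describes: associated flow measurements around one process step are balanced, and a flowsheet deviation or instrument malfunction is flagged when the balance does not close within a tolerance. All tag names, values and the tolerance are hypothetical.

```python
# Hedged sketch of a balance-type cross-check on associated flow measurements.
# Tags, values and tolerance are hypothetical; the WAK models are not given here.

def balance_closes(feed_lph, raffinate_lph, product_lph, tol=0.05):
    """True if the measured outflows match the feed within a relative tolerance."""
    imbalance = feed_lph - (raffinate_lph + product_lph)
    return abs(imbalance) <= tol * feed_lph

readings = {"feed_lph": 120.0, "raffinate_lph": 95.0, "product_lph": 18.0}
if not balance_closes(**readings):
    print("ALARM: first-cycle flow balance does not close:", readings)
```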

  6. Use of computers at nuclear power plants

    International Nuclear Information System (INIS)

    Sen'kin, V.I.; Ozhigano, Yu.V.

    1974-01-01

    Applications of information and control computers in reactor control systems in Great Britain, the Federal Republic of Germany, France, Canada, and the USA are surveyed. For the purpose of increasing the reliability of the computers, effective means were designed for emergency operation and automatic computerized controls, and highly reliable micromodule modifications were developed. Numerical data units were handled along with the development of methods and diagrams for converting analog values to numerical values, in accordance with modern requirements. Some data are presented on computer reliability in nuclear power plants both operating and under construction. It is concluded that in foreign nuclear power stations informational and computational computers are finding increasingly wide use. Rapid response, the possibility of controlling a large number of parameters, and operation of the computers with increasing reliability are speeding up the process of introducing computers in atomic energy and broadening their functions. (V.P.)

  7. Judicial Process, Grade Eight. Resource Unit (Unit V).

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.

    This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the judicial process. The unit was designed with two major purposes in mind. First, it helps pupils understand judicial decision-making, and second, it provides for the study of the rights guaranteed by the federal Constitution. Both…

  8. Computer interfacing of the unified systems for personnel supervising in nuclear units

    International Nuclear Information System (INIS)

    Staicu, M.

    1997-01-01

    The dosimetric supervising of the personnel working in nuclear units is based on the information supplied by: 1) the dosimetric data obtained by the method of thermoluminescence; 2) the dosimetric data obtained by the method of photodosimetry; 3) the records from periodic medical control. To create a unified system of supervising, the following elements were combined: a) an Automatic System of TLD Reading and Data Processing (SACDTL). The data from this system are transmitted 'on line' to the computer; b) the measuring line for the optical density of exposed dosimetric films. The interface achieved within the general SACDTL ensemble could be adapted to this line of measurement. The transmission of the data from the measurement line to the computer is made 'on line'; c) the medical surveillance data for each person, transmitted 'off line' to the database computer. The unified system resulting from the unification of the three supervising systems will perform the following general functions: - registering of the personnel working in the nuclear field; - recording the dosimetric data; - processing and presentation of the data; - issuing of measurement bulletins. Thus, by means of the unified database, dosimetric intercomparison and correlative studies can be undertaken. (author)

  9. Optical fiber network of the data acquisition sub system of SIIP Integral Information System of Process, Unit 2

    International Nuclear Information System (INIS)

    Moreno R, J.; Ramirez C, M.J.; Pina O, I.; Cortazar F, S.; Villavicencio R, A.

    1995-01-01

    In this article, a description is given of the optical-fiber communication network that links the SIIP data acquisition equipment with the computers of the Laguna Verde Nuclear Power Plant. A description of the equipment and accessories that make up the network is also presented. The plant requirements that dictated the selection of optical fiber as the interconnection medium are also highlighted. SIIP is a computerized, centralized and integrated system that performs information functions by means of signal acquisition and the computational processing required for the continuous evaluation of the nuclear power plant under normal and emergency conditions. It is exclusively a monitoring system with no action on the generation process; that is to say, it only acquires, processes and stores information and assists the personnel in the operational analysis of the nuclear plant. SIIP is a joint project of three participating institutions: Federal Electricity Commission / Electrical Research Institute / General Electric. (Author)

  10. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    Science.gov (United States)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of using GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50 times speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
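    As a hedged, much-reduced stand-in for the per-cell work described above, the sketch below performs a first-order upwind finite-volume update for 1D linear advection in NumPy. The paper's solver treats 3D unstructured compressible Euler/Navier-Stokes with CUDA; this toy only illustrates the kind of per-cell flux update that a GPU port would typically map to one thread per cell.

```python
# Minimal 1D finite-volume sketch (first-order upwind, linear advection).
# This is not the paper's 3D unstructured solver; it only shows the per-cell
# flux update that GPU implementations parallelise (one thread per cell).
import numpy as np

nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)     # initial Gaussian pulse

for _ in range(100):
    flux = a * u                         # upwind flux (a > 0), periodic domain
    u = u - dt / dx * (flux - np.roll(flux, 1))

print("pulse peak after 100 steps:", u.max())
```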

  11. High-speed optical coherence tomography signal processing on GPU

    International Nuclear Information System (INIS)

    Li Xiqi; Shi Guohua; Zhang Yudong

    2011-01-01

    The signal processing speed of spectral domain optical coherence tomography (SD-OCT) has become a bottleneck in many medical applications. Recently, a time-domain interpolation method was proposed. This method not only achieves a better signal-to-noise ratio (SNR) but also a faster signal processing time for SD-OCT than the widely used zero-padding interpolation method. Furthermore, the re-sampled data are obtained by convolving the acquired data with the coefficients in the time domain. Thus, many interpolations can be performed concurrently, making this interpolation method well suited to parallel computing. Ultra-high-speed optical coherence tomography signal processing can therefore be realized by using a graphics processing unit (GPU) with the compute unified device architecture (CUDA). This paper introduces the signal processing steps of SD-OCT on the GPU. An experiment was performed in which a frame of SD-OCT data (400 A-lines x 2048 pixels per A-line) was acquired and processed in real time on the GPU. The results show that the processing can be finished in 6.208 milliseconds, which is 37 times faster than on a central processing unit (CPU).
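    The kernel and calibration used in the paper are not reproduced here; the hedged sketch below only shows why time-domain interpolation parallelises well: each resampled point is an independent weighted sum of nearby acquired samples (here with a simple linear kernel), so a GPU can assign one thread per output sample.

```python
# Hedged sketch of kernel-based resampling for SD-OCT spectra. Each output
# sample is an independent weighted sum of neighbouring acquired samples,
# which is the property that makes the step GPU-friendly. The linear kernel
# and the calibration curve below are hypothetical simplifications.
import numpy as np

def resample(signal, positions):
    """Resample `signal` at fractional sample `positions` with a linear kernel."""
    i0 = np.floor(positions).astype(int)
    i1 = np.clip(i0 + 1, 0, len(signal) - 1)
    w = positions - i0
    return (1.0 - w) * signal[i0] + w * signal[i1]

rng = np.random.default_rng(1)
spectrum = rng.normal(size=2048)                      # one hypothetical A-line
# Hypothetical calibration: mildly nonlinear mapping to a linear-in-wavenumber grid.
positions = np.linspace(0, 2047, 2048) ** 1.001 / 2047 ** 0.001
resampled = resample(spectrum, np.clip(positions, 0, 2047))
a_line = np.abs(np.fft.ifft(resampled))               # depth profile after FFT
print(a_line.shape)
```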

  12. Mission: Define Computer Literacy. The Illinois-Wisconsin ISACS Computer Coordinators' Committee on Computer Literacy Report (May 1985).

    Science.gov (United States)

    Computing Teacher, 1985

    1985-01-01

    Defines computer literacy and describes a computer literacy course which stresses ethics, hardware, and disk operating systems throughout. Core units on keyboarding, word processing, graphics, database management, problem solving, algorithmic thinking, and programing are outlined, together with additional units on spreadsheets, simulations,…

  13. Central alarm system replacement in NPP Krsko

    International Nuclear Information System (INIS)

    Cicvaric, D.; Susnic, M.; Djetelic, N.

    2004-01-01

    The current NPP Krsko central alarm system consists of three main segments: the Main Control Board alarm system (BETA 1000), the Ventilation Control Board alarm system (BETA 1000) and the Electrical Control Board alarm system (BETA 1100). All sections are equipped with specific BetaTone audible alarms and with silence, acknowledge and test push buttons. The main reason for the central alarm system replacement is system obsolescence and problems with maintenance due to a lack of spare parts. Another issue is the lack of system redundancy, which could lead to the loss of several Alarm Light Boxes in the event of a particular power supply failure. The current central alarm system does not provide means of alarm optimization, grouping or prioritization. There are three main options for central alarm system replacement: a conventional annunciator system, a hybrid annunciator system and an advanced alarm system. Advanced alarm system implementation requires a Main Control Board upgrade, integration of process instrumentation and the plant process computer, as well as a long replacement time. NPP Krsko has decided to implement a hybrid alarm system with a patchwork approach. The new central alarm system will be stand-alone and digital, with advanced filtering and alarm grouping options. The sequence-of-events recorder will be linked with the plant process computer and time-synchronized with a redundant GPS signal. Advanced functions such as links to plant procedures will be implemented with the plant process computer upgrade in the 2006 outage. Central alarm system replacement is due in the 2004 outage. (author)

  14. Model of a programmable quantum processing unit based on a quantum transistor effect

    Science.gov (United States)

    Ablayev, Farid; Andrianov, Sergey; Fetisov, Danila; Moiseev, Sergey; Terentyev, Alexandr; Urmanchev, Andrey; Vasiliev, Alexander

    2018-02-01

    In this paper we propose a model of a programmable quantum processing device realizable with existing nano-photonic technologies. It can be viewed as a basis for new high-performance hardware architectures. Protocols for the physical implementation of the device, based on controlled photon transfer and atomic transitions, are presented. These protocols are designed for executing basic single-qubit and multi-qubit gates forming a universal set. We analyze the possible operation of this quantum computer scheme. We then formalize the physical architecture by a mathematical model of a Quantum Processing Unit (QPU), which we use as a basis for the Quantum Programming Framework. This framework makes it possible to perform universal quantum computations in a multitasking environment.

  15. 77 FR 59679 - Central Vermont Public Service Corporation (Millstone Power Station, Unit 3); Order Approving...

    Science.gov (United States)

    2012-09-28

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0044; Docket No. 50-423] Central Vermont Public Service Corporation (Millstone Power Station, Unit 3); Order Approving Application Regarding Corporate Restructuring and Conforming Amendment I Dominion Nuclear Connecticut, Inc. (DNC), Central Vermont Public Service...

  16. A new era for central processing and production in CMS

    International Nuclear Information System (INIS)

    Fajardo, E; Gutsche, O; Foulkes, S; Linacre, J; Spinoso, V; Lahiff, A; Gomez-Ceballos, G; Klute, M; Mohapatra, A

    2012-01-01

    The goal for CMS computing is to maximise the throughput of simulated event generation while also processing event data generated by the detector as quickly and reliably as possible. To maintain this achievement as the quantity of events increases CMS computing has migrated at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework offers improved processing efficiency and increased resource usage as well as a reduction in operational manpower. In addition to the challenges encountered during the design of the WMAgent framework, several operational issues have arisen during its commissioning. The largest operational challenges were in the usage and monitoring of resources, mainly a result of a change in the way work is allocated. Instead of work being assigned to operators, all work is centrally injected and managed in the Request Manager system and the task of the operators has changed from running individual workflows to monitoring the global workload. In this report we present how we tackled some of the operational challenges, and how we benefitted from the lessons learned in the commissioning of the WMAgent framework at the Tier 2 level in late 2011. As case studies, we will show how the WMAgent system performed during some of the large data reprocessing and Monte Carlo simulation campaigns.

  17. Software of the BESM-6 computer for automatic image processing from liquid-hydrogen bubble chambers

    International Nuclear Information System (INIS)

    Grebenikov, E.A.; Kiosa, M.N.; Kobzarev, K.K.; Kuznetsova, N.A.; Mironov, S.V.; Nasonova, L.P.

    1978-01-01

    A set of programs which is used in "road guidance" mode on the BESM-6 computer to process picture information taken in liquid-hydrogen bubble chambers is discussed. This mode allows the system to process data from an automatic scanner (AS) taking into account the results of manual scanning. The system hardware includes: an automatic scanner, an M-6000 mini-controller and a BESM-6 computer. The software is functionally divided into the following units: computation of event mask parameters and generation of data files controlling the AS; front-end processing of data coming from the AS; filtering of track data; simulation of AS operation and gauging of the AS reference system. To speed up the overall performance, programs which receive and decode data coming from the AS, via the M-6000 controller and the data link to the BESM-6 computer, are written in machine language.

  18. Computer-aided modeling of aluminophosphate zeolites as packings of building units

    KAUST Repository

    Peskov, Maxim

    2012-03-22

    New building schemes of aluminophosphate molecular sieves from packing units (PUs) are proposed. We have investigated 61 framework types discovered in zeolite-like aluminophosphates and have identified important PU combinations using a recently implemented computational algorithm of the TOPOS package. All PUs whose packing completely determines the overall topology of the aluminophosphate framework were described and catalogued. We have enumerated 235 building models for the aluminophosphates belonging to 61 zeolite framework types, from ring- or cage-like PU clusters. It is indicated that PUs can be considered as precursor species in the zeolite synthesis processes. © 2012 American Chemical Society.

  19. The modernization of the process computer of the Trillo Nuclear Power Plant; Modernizacion del ordenador de proceso de la Central Nuclear de Trillo

    Energy Technology Data Exchange (ETDEWEB)

    Martin Aparicio, J.; Atanasio, J.

    2011-07-01

    The paper describes the modernization of the process computer of the Trillo Nuclear Power Plant. The process computer functions have been incorporated into the non-safety I&C platform selected at Trillo NPP: the Siemens SPPA-T2000 OM690 (formerly known as Teleperm XP). The upgrade of the human-machine interface of the control room has been included in the project. The modernization project has followed the same development process used in the upgrade of the process computers of German PWR nuclear power plants. (Author)

  20. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a graphics processing unit (GPU) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered, and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20) versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.

  1. Vertebrobasilar system computed tomographic angiography in central vertigo.

    Science.gov (United States)

    Paşaoğlu, Lale

    2017-03-01

    The incidence of vertigo in the population is 20% to 30%, and one-fourth of the cases are related to central causes. The aim of this study was to evaluate computed tomography angiography (CTA) findings of the vertebrobasilar system in central vertigo without stroke. CTA and magnetic resonance images of patients with vertigo were retrospectively evaluated. One hundred twenty-nine patients suspected of having central vertigo according to history, physical examination, and otological and neurological tests, without signs of infarction on diffusion-weighted magnetic resonance imaging, were included in the study. The control group included 120 patients with similar vascular disease risk factors but without vertigo. Vertebral and basilar artery diameters, hypoplasias, exit-site variations of the vertebral artery, vertebrobasilar tortuosity, and stenosis of ≥50% detected on CTA were recorded for all patients. The independent-samples t test was used for variables with normal distribution, and the Mann-Whitney U test for non-normal distributions. Differences in the distribution of categorical variables between groups were analyzed with the χ2 and/or Fisher exact test. Vertebral artery hypoplasia and ≥50% stenosis were seen more often in the vertigo group (P = 0.000). Of the vertigo patients with ≥50% stenosis, 54 (69.2%) had stenosis at the V1 segment, 9 (11.5%) at the V2 segment, 2 (2.5%) at the V3 segment, and 13 (16.6%) at the V4 segment. Both the vertigo and control groups had similar basilar artery hypoplasia and ≥50% stenosis rates (P = 0.800). CTA may be helpful to clarify the association between abnormal CTA findings of the vertebral arteries and central vertigo. This article reveals the opportunity to diagnose posterior circulation abnormalities causing central vertigo with a feasible method such as CTA.

  2. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
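    The FIT figure quoted above comes from the authors' detailed simulations; as a hedged reminder of the unit itself, the arithmetic sketch below converts a purely hypothetical per-bit upset rate into a system failure rate in FIT (failures per 10^9 device-hours).

```python
# Hedged arithmetic sketch of a FIT estimate (failures per 1e9 device-hours).
# The per-bit upset rate and critical fraction are hypothetical placeholders,
# not values from the paper.
per_bit_seu_per_hour = 1e-21        # hypothetical MRAM cell upset rate
memory_bits = 1 * 2**30             # 1 Gbit working memory
critical_fraction = 0.5             # hypothetical share of upsets causing a system failure

failures_per_hour = per_bit_seu_per_hour * memory_bits * critical_fraction
print(f"estimated system failure rate: {failures_per_hour * 1e9:.4f} FIT")
```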

  3. A distributed process monitoring system for nuclear powered electrical generating facilities

    International Nuclear Information System (INIS)

    Sweney, A.D.

    1991-01-01

    Duke Power Company is one of the largest investor owned utilities in the United States, with a service area of 20,000 square miles extending across North and South Carolina. Oconee Nuclear Station, one of Duke Power's three nuclear generating facilities, is a three unit pressurized water reactor site and has, over the course of its 15-year operating lifetime, effectively run out of plant processing capability. From a severely overcrowded cable spread room to an aging overtaxed Operator Aid Computer, the problems with trying to add additional process variables to the present centralized Operator Aid Computer are almost insurmountable obstacles. This paper reports that for this reason, and to realize the inherent benefits of a distributed process monitoring and control system, Oconee has embarked on a project to demonstrate the ability of a distributed system to perform in the nuclear power plant environment

  4. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution.

    Science.gov (United States)

    Correia, J R C C C; Martins, C J A P

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.
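    The Press-Ryden-Spergel equations themselves (with their cosmological damping and expansion terms) are not reproduced here; the hedged sketch below evolves a flat-space double-well scalar field with a leapfrog stencil on a 2D periodic grid, which is the kind of per-cell update the OpenCL port parallelises.

```python
# Hedged flat-space stand-in for a PRS-style domain-wall update: a double-well
# scalar field evolved with leapfrog on a 2D periodic grid. The cosmological
# damping/expansion terms of the actual PRS algorithm are omitted.
import numpy as np

n, dx, dt, lam = 128, 1.0, 0.2, 1.0
rng = np.random.default_rng(2)
phi = rng.uniform(-1.0, 1.0, size=(n, n))   # random initial field
pi = np.zeros_like(phi)                     # field velocity

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for _ in range(200):
    pi += dt * (laplacian(phi) - lam * phi * (phi**2 - 1.0))
    phi += dt * pi

# Crude wall indicator: fraction of x-links where the field changes sign.
walls = np.mean(np.sign(phi) != np.sign(np.roll(phi, 1, 0)))
print(f"fraction of links crossing a wall: {walls:.3f}")
```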

  6. Centralized TLD service and record keeping in Canada

    International Nuclear Information System (INIS)

    Grogan, D.; Ashmore, J.P.; Bradley, R.P.

    1979-01-01

    A centralized automated TLD service operated by the Department of National Health and Welfare went into operation in May 1977 to monitor radiation workers throughout Canada. Twenty thousand employees from a wide range of disciplines are enrolled, and the number will be increased to fifty thousand by September 1978. A prototype of the system, operational from September 1976 to May 1977 for three thousand people, has already been described. A description of technical and operational highlights is presented as well as a description of problems experienced during the first full year of operation. Details of costs, conversion logistics, operational performance and technical problems are included. A comparison of the advantages and disadvantages of changing from film dosimetry to TLD in a nationwide context is detailed. The dose meter read-out unit is interfaced, through video terminals, with a time-sharing computer system programmed to provide direct access to the Canadian National Dose Registry. Details of this linkage are described, as are the computer programmes for routine processing of raw batch data. The centralized TLD service, interactively linked with the National Dose Registry, provides a comprehensive occupational monitoring programme invaluable for regulatory control. (author)

  7. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    International Nuclear Information System (INIS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-01-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow of up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem then is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high throughput general purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput. (paper)

  8. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue in large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using OpenACC was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transfer between the GPU and the CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.

  9. FAST CALCULATION OF THE LOMB-SCARGLE PERIODOGRAM USING GRAPHICS PROCESSING UNITS

    International Nuclear Information System (INIS)

    Townsend, R. H. D.

    2010-01-01

    I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match eight CPU cores, and on a high-end GPU it is faster by a factor approaching 30. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
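    A hedged CPU-side reference for the statistic the GPU code evaluates: the Lomb-Scargle periodogram of an unevenly sampled series, here computed with SciPy on synthetic data. The GPU code in the paper parallelises the same per-frequency sums; none of its implementation details are reproduced below.

```python
# CPU reference sketch: Lomb-Scargle periodogram of unevenly sampled data.
# Synthetic signal with a 0.2 cycles/unit-time sinusoid plus noise.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 100.0, size=500))        # uneven sampling times
y = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.normal(size=t.size)
y -= y.mean()

freqs = np.linspace(0.01, 1.0, 2000)                  # trial frequencies
power = lombscargle(t, y, 2 * np.pi * freqs)          # scipy expects angular frequencies
print(f"peak at {freqs[np.argmax(power)]:.3f} cycles per unit time (true 0.200)")
```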

  10. Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo

    Science.gov (United States)

    Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    2012-02-01

    In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and large bandwidth to many shared memories take advantage of the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.
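    The exact Hirsch-Fye update formula is not reproduced here; its computational core is, however, a rank-1 (outer-product) update of a stored matrix, and the hedged sketch below shows that pattern in its generic Sherman-Morrison form, checked against a direct inverse on random stand-in matrices.

```python
# Hedged sketch of the rank-1 (outer-product) update pattern that dominates
# Green-function updates in Hirsch-Fye QMC, written in its generic
# Sherman-Morrison form. Matrices here are random stand-ins, not physics.
import numpy as np

rng = np.random.default_rng(4)
n = 256
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
A_inv = np.linalg.inv(A)
u = rng.normal(size=n)
v = rng.normal(size=n)

# (A + u v^T)^-1 = A^-1 - (A^-1 u)(v^T A^-1) / (1 + v^T A^-1 u)
Au = A_inv @ u
vA = v @ A_inv
updated_inv = A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

direct = np.linalg.inv(A + np.outer(u, v))
print("max deviation from direct inverse:", np.abs(updated_inv - direct).max())
```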

  11. Role of centralized review processes for making reimbursement decisions on new health technologies in Europe

    Directory of Open Access Journals (Sweden)

    Stafinski T

    2011-08-01

    Full Text Available Tania Stafinski,1 Devidas Menon,2 Caroline Davis,1 Christopher McCabe3 (1Health Technology and Policy Unit, 2Health Policy and Management, School of Public Health, University of Alberta, Edmonton, Alberta, Canada; 3Academic Unit of Health Economics, Leeds Institute for Health Sciences, University of Leeds, Leeds, UK). Background: The purpose of this study was to compare centralized reimbursement/coverage decision-making processes for health technologies in 23 European countries, according to: mandate, authority, structure, and policy options; mechanisms for identifying, selecting, and evaluating technologies; clinical and economic evidence expectations; committee composition, procedures, and factors considered; available conditional reimbursement options for promising new technologies; and the manufacturers' roles in the process. Methods: A comprehensive review of publicly available information from the peer-reviewed literature (using a variety of bibliographic databases) and gray literature (e.g., working papers, committee reports, presentations, and government documents) was conducted. Policy experts in each of the 23 countries were also contacted. All information collected was reviewed by two independent researchers. Results: Most European countries have established centralized reimbursement systems for making decisions on health technologies. However, the scope of technologies considered, as well as the processes for identifying, selecting, and reviewing them, varies. All systems include an assessment of clinical evidence, compiled in accordance with their own guidelines or internationally recognized published ones. In addition, most systems require an economic evaluation. The quality of such information is typically assessed by content and methodological experts. Committees responsible for formulating recommendations or decisions are multidisciplinary. While the criteria used by committees appear transparent, how they are operationalized during deliberations

  12. The Executive Process, Grade Eight. Resource Unit (Unit III).

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.

    This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the executive process. The unit uses case studies of presidential decision making such as the decision to drop the atomic bomb on Hiroshima, the Cuba Bay of Pigs and quarantine decisions, and the Little Rock decision. A case study of…

  13. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek

    2017-10-17

    Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic to the size of the input graph) or perform unnecessary computations rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
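    iCentral itself is not reproduced here; as a hedged point of reference, the sketch below shows the naive baseline it improves on: recomputing exact betweenness centrality from scratch with NetworkX after a single (hypothetical) edge insertion.

```python
# Naive baseline for incremental betweenness centrality: full recomputation
# with NetworkX after an edge insertion. iCentral avoids this by localising
# work to the affected biconnected component (not implemented here).
import networkx as nx

G = nx.karate_club_graph()
before = nx.betweenness_centrality(G)

G.add_edge(15, 16)                     # hypothetical new edge
after = nx.betweenness_centrality(G)   # recompute everything from scratch

changed = sum(1 for node in G if abs(after[node] - before[node]) > 1e-12)
print(f"{changed} of {G.number_of_nodes()} nodes changed centrality")
```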

  14. Transaction processing in the common node of a distributed function laboratory computer system

    International Nuclear Information System (INIS)

    Stubblefield, F.W.; Dimmler, D.G.

    1975-01-01

    A computer network architecture consisting of a common node processor for managing peripherals and files and a number of private node processors for laboratory experiment control is briefly reviewed. Central to the problem of private node-common node communication is the concept of a transaction. The collection of procedures and the data structure associated with a transaction are described. The common node properties assigned to a transaction and procedures required for its complete processing are discussed. (U.S.)

  15. [A computer aided design approach of all-ceramics abutment for maxilla central incisor].

    Science.gov (United States)

    Sun, Yu-chun; Zhao, Yi-jiao; Wang, Yong; Han, Jing-yun; Lin, Ye; Lü, Pei-jun

    2010-10-01

    To establish a computer aided design (CAD) software platform for individualized abutments for the maxilla central incisor. Three-dimensional data of the incisor were collected by scanning and geometric transformation. The data mainly included the occlusal part of the healing abutment, the location carinae of the bedpiece, the occlusal 1/3 part of the artificial gingiva's inner surface, and so on. The all-ceramic crown designed in advance was "virtually cut back" to obtain the original data of the abutment's supragingival part. The abutment's in-gum part was designed to simulate the individual natural tooth root. Functions such as "data offset", "bi-rail sweep surface" and "loft surface" were used in the CAD process. The CAD route for the individualized all-ceramic abutment was set up. The functions and application methods were decided and the complete CAD process was realized. The software platform was basically set up according to the requirements of the dental clinic.

  16. The influence of (central) auditory processing disorder in speech sound disorders.

    Science.gov (United States)

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein

    2016-01-01

    Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing of children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to their (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  17. Portable brine evaporator unit, process, and system

    Science.gov (United States)

    Hart, Paul John; Miller, Bruce G.; Wincek, Ronald T.; Decker, Glenn E.; Johnson, David K.

    2009-04-07

    The present invention discloses a comprehensive, efficient, and cost effective portable evaporator unit, method, and system for the treatment of brine. The evaporator unit, method, and system require a pretreatment process that removes heavy metals, crude oil, and other contaminates in preparation for the evaporator unit. The pretreatment and the evaporator unit, method, and system process metals and brine at the site where they are generated (the well site). Thus, saving significant money to producers who can avoid present and future increases in transportation costs.

  18. Large-Scale Sentinel-1 Processing for Solid Earth Science and Urgent Response using Cloud Computing and Machine Learning

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.

    2017-12-01

    With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced with processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100 TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Aspects such as how to process these voluminous SLCs and interferograms at global scales, how to keep up with the large daily SAR data volumes, and how to handle the high data rates are being explored. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products and challenges that arise from being able to process SAR datasets to derived time series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud? We will also

  19. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)

    2009-10-15

    The maximum likelihood-expectation maximization (ML-EM) is the statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The time delay on computations for projection, errors between measured and estimated data, and backprojection in an iteration were measured. The total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computing took a total of 18 min and 8 sec, respectively. The improvement was about 135 times and was caused by delays in the CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time delay per iteration due to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could be easily modified for some other imaging geometries.
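    As a hedged, geometry-free illustration of the update that the paper parallelises, the NumPy sketch below runs ML-EM for a generic linear Poisson model y ~ Poisson(Ax); the forward projection (A @ x) and backprojection (A.T @ ...) are the steps mapped to the GPU in the paper. The system matrix and data are random stand-ins.

```python
# Hedged NumPy sketch of the ML-EM iteration for a generic emission model.
# Forward projection (A @ x) and backprojection (A.T @ ...) are the operations
# the paper offloads to the GPU; geometry and data here are random stand-ins.
import numpy as np

rng = np.random.default_rng(5)
n_bins, n_voxels = 400, 256
A = rng.uniform(0.0, 1.0, size=(n_bins, n_voxels))   # system matrix
x_true = rng.uniform(0.5, 2.0, size=n_voxels)
y = rng.poisson(A @ x_true)                          # measured counts

x = np.ones(n_voxels)                                # uniform initial image
sensitivity = A.sum(axis=0)                          # A^T 1
for _ in range(32):
    expected = A @ x                                 # forward projection
    ratio = y / np.maximum(expected, 1e-12)
    x *= (A.T @ ratio) / sensitivity                 # backprojection + multiplicative update

print("relative error after 32 iterations:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```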

  1. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  2. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    Science.gov (United States)

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.

  3. Semi-automatic film processing unit

    International Nuclear Information System (INIS)

    Mohamad Annuar Assadat Husain; Abdul Aziz Bin Ramli; Mohd Khalid Matori

    2005-01-01

    The design concept applied in the development of a semi-automatic film processing unit requires creativity and user support in channelling the required information to select materials and an operating system that suit the design produced. Low cost and efficient operation are challenges that must be met while keeping abreast of fast technological advancement. In producing this processing unit, a few elements need to be considered in order to produce a high quality image. Consistent movement and correct time coordination for developing and drying are among the elements which need to be controlled. Other elements which need serious attention are temperature, liquid density and the amount of time for the chemical liquids to react. Subsequent chemical reactions that take place cause the liquid chemicals to age, and this adversely affects the quality of the image produced. The unit is also equipped with a liquid chemical drainage system and a disposal chemical tank. This unit would be useful in GP clinics, especially in rural areas which practise a manual system for developing and require low operational cost. (Author)

  4. Report on the Fourth Reactor Refueling. Laguna Verde Nuclear Central. Unit 1. April-May 1995

    International Nuclear Information System (INIS)

    Mendoza L, A.; Flores C, E.; Lopez G, C.P.F.

    1995-01-01

    The fourth refueling of Unit 1 of the Laguna Verde Nuclear Central was carried out in the period from April 17 to May 31, 1995, with the participation of a task group of 358 persons, including technicians and radiation protection officials and auxiliaries. Radiation monitoring and radiological surveillance of the workers were maintained throughout the refueling process, always in keeping with the ALARA criteria. The check points for radiation levels were set at: the primary container or dry well, the reloading floor, the decontamination room (level 10.5), the turbine building and the radioactive waste building. To take advantage of the refueling outage, rooms 203 and 213 of the turbine building were subjected to inspection and maintenance work on valves, heaters and heater drains. Management aspects such as personnel selection and training, costs, and accounting are also presented in this report. Owing to the high man-hour cost of the ININ staff members, their participation in the refueling process was smaller in number than in previous years. (Author)

  5. En Garde: Fencing at Kansas City's Central Computers Unlimited/Classical Greek Magnet High School, 1991-1995

    Science.gov (United States)

    Poos, Bradley W.

    2015-01-01

    Central High School in Kansas City, Missouri is one of the oldest schools west of the Mississippi and the first public high school built in Kansas City. Kansas City's magnet plan resulted in Central High School being rebuilt as the Central Computers Unlimited/Classical Greek Magnet High School, a school that was designed to offer students an…

  6. Hand held control unit for controlling a display screen-oriented computer game, and a display screen-oriented computer game having one or more such control units

    NARCIS (Netherlands)

    2001-01-01

    A hand-held control unit is used to control a display screen-oriented computer game. The unit comprises a housing with a front side, a set of control members lying generally flush with the front side, through actuation of which actions of in-game display items are controlled, and an output for

  7. Complexity estimates based on integral transforms induced by computational units

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2012-01-01

    Roč. 33, September (2012), s. 160-167 ISSN 0893-6080 R&D Projects: GA ČR GAP202/11/1368 Institutional research plan: CEZ:AV0Z10300504 Institutional support: RVO:67985807 Keywords : neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units Subject RIV: IN - Informatics, Computer Science Impact factor: 1.927, year: 2012

  8. Sustained reduction of central line-associated bloodstream infections outside the intensive care unit with a multimodal intervention focusing on central line maintenance.

    Science.gov (United States)

    Dumyati, Ghinwa; Concannon, Cathleen; van Wijngaarden, Edwin; Love, Tanzy M T; Graman, Paul; Pettis, Ann Marie; Greene, Linda; El-Daher, Nayef; Farnsworth, Donna; Quinlan, Gail; Karr, Gloria; Ward, Lynnette; Knab, Robin; Shelly, Mark

    2014-07-01

    Central venous catheter use is common outside the intensive care units (ICUs), but prevention in this setting is not well studied. We initiated surveillance for central line-associated bloodstream infections (CLABSIs) outside the ICU setting and studied the impact of a multimodal intervention on the incidence of CLABSIs across multiple hospitals. This project was constructed as a prospective preintervention-postintervention design. The project comprised 3 phases (preintervention [baseline], intervention, and postintervention) over a 4.5-year period (2008-2012) and was implemented through a collaborative of 37 adult non-ICU wards at 6 hospitals in the Rochester, NY area. The intervention focused on engagement of nursing staff and leadership, nursing education on line care maintenance, competence evaluation, audits of line care, and regular feedback on CLABSI rates. Quarterly rates were compared over time in relation to intervention implementation. The overall CLABSI rate for all participating units decreased from 2.6/1,000 line-days preintervention to 2.1/1,000 line-days during the intervention and to 1.3/1,000 line-days postintervention, a 50% reduction (95% confidence interval, .40-.59) compared with the preintervention period (P .0179). A multipronged approach blending both the adaptive and technical aspects of care including front line engagement, education, execution of best practices, and evaluation of both process and outcome measures may provide an effective strategy for reducing CLABSI rates outside the ICU. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  9. Computational Modeling of Arc-Slag Interaction in DC Furnaces

    Science.gov (United States)

    Reynolds, Quinn G.

    2017-02-01

    The plasma arc is central to the operation of the direct-current arc furnace, a unit operation commonly used in high-temperature processing of both primary ores and recycled metals. The arc is a high-velocity, high-temperature jet of ionized gas created and sustained by interactions among the thermal, momentum, and electromagnetic fields resulting from the passage of electric current. In addition to being the primary source of thermal energy, the arc jet also couples mechanically with the bath of molten process material within the furnace, causing substantial splashing and stirring in the region in which it impinges. The arc's interaction with the molten bath inside the furnace is studied through use of a multiphase, multiphysics computational magnetohydrodynamic model developed in the OpenFOAM® framework. Results from the computational solver are compared with empirical correlations that account for arc-slag interaction effects.

  10. Recognition of oral spelling is diagnostic of the central reading processes.

    Science.gov (United States)

    Schubert, Teresa; McCloskey, Michael

    2015-01-01

    The task of recognition of oral spelling (stimulus: "C-A-T", response: "cat") is often administered to individuals with acquired written language disorders, yet there is no consensus about the underlying cognitive processes. We adjudicate between two existing hypotheses: Recognition of oral spelling uses central reading processes, or recognition of oral spelling uses central spelling processes in reverse. We tested the recognition of oral spelling and spelling to dictation abilities of a single individual with acquired dyslexia and dysgraphia. She was impaired relative to matched controls in spelling to dictation but unimpaired in recognition of oral spelling. Recognition of oral spelling for exception words (e.g., colonel) and pronounceable nonwords (e.g., larth) was intact. Our results were predicted by the hypothesis that recognition of oral spelling involves the central reading processes. We conclude that recognition of oral spelling is a useful tool for probing the integrity of the central reading processes.

  11. Dynamic computing random access memory

    International Nuclear Information System (INIS)

    Traversa, F L; Bonani, F; Pershin, Y V; Di Ventra, M

    2014-01-01

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200–2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. (paper)

  12. Computer Processing of Esperanto Text.

    Science.gov (United States)

    Sherwood, Bruce

    1981-01-01

    Basic aspects of computer processing of Esperanto are considered in relation to orthography and computer representation, phonetics, morphology, one-syllable and multisyllable words, lexicon, semantics, and syntax. There are 28 phonemes in Esperanto, each represented in orthography by a single letter. The PLATO system handles diacritics by using a…
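
    Because Esperanto's 28 phonemes map one-to-one onto 28 letters, six of which carry diacritics (ĉ, ĝ, ĥ, ĵ, ŝ, ŭ), text processing usually needs a normalization step between ASCII surrogate spellings and the diacritic letters. The abstract does not say how PLATO encoded the diacritics, so the sketch below uses the common "x-system" convention purely as an illustrative assumption.

```python
# Illustrative normalization between the ASCII "x-system" surrogate spelling
# and Esperanto's six diacritic letters. The x-system is assumed here for
# demonstration only; the PLATO encoding referenced in the abstract may differ.

X_SYSTEM = {
    "cx": "ĉ", "gx": "ĝ", "hx": "ĥ", "jx": "ĵ", "sx": "ŝ", "ux": "ŭ",
    "Cx": "Ĉ", "Gx": "Ĝ", "Hx": "Ĥ", "Jx": "Ĵ", "Sx": "Ŝ", "Ux": "Ŭ",
}

def from_x_system(text: str) -> str:
    """Replace x-system digraphs with the corresponding diacritic letters."""
    for digraph, letter in X_SYSTEM.items():
        text = text.replace(digraph, letter)
    return text

print(from_x_system("ehxosxangxo cxiujxauxde"))  # -> "eĥoŝanĝo ĉiuĵaŭde"
```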

  13. Some Aspects of Process Computers Configuration Control in Nuclear Power Plant Krsko - Process Computer Signal Configuration Database (PCSCDB)

    International Nuclear Information System (INIS)

    Mandic, D.; Kocnar, R.; Sucic, B.

    2002-01-01

    During the operation of NEK and other nuclear power plants it has been recognized that certain issues related to the usage of digital equipment and associated software in NPP technological process protection, control and monitoring are not adequately addressed in the existing programs and procedures. The term and the process of Process Computers Configuration Control join three 10CFR50 Appendix B quality requirements of Process Computers application in NPP: Design Control, Document Control, and Identification and Control of Materials, Parts and Components. This paper describes the Process Computer Signal Configuration Database (PCSCDB), which was developed and implemented in order to resolve some aspects of Process Computer Configuration Control related to the signals or database points that exist in the life cycle of different Process Computer Systems (PCS) in Nuclear Power Plant Krsko. PCSCDB is a controlled master database related to the definition and description of the configurable database points associated with all Process Computer Systems in NEK. PCSCDB holds attributes related to the configuration of addressable and configurable real-time database points, together with attributes related to signal life-cycle references and history data, such as: input/output signals; manually input database points; program constants; setpoints; database points calculated by application programs or SCADA calculation tools; control flags (for example, to enable or disable a certain program feature); signal acquisition design references to the DCM (Document Control Module, the application software for document control within the Management Information System - MIS) and the MECL (Master Equipment and Component List, the MIS application software for identification and configuration control of plant equipment and components); the usage of a particular database point in particular application software packages and in man-machine interface features (display mimics, printout reports, ...); and signals history (EEAR Engineering
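
    A configuration-control database of this kind essentially keeps one master record per configurable database point, together with its type, its design references (DCM, MECL), and the applications and displays that use it. The sketch below shows one possible relational layout; all table and column names are hypothetical, chosen only to mirror the attribute categories listed above, and are not the actual PCSCDB schema.

```python
# Hypothetical relational sketch mirroring the attribute categories listed
# above (point type, DCM/MECL design references, usage, history). The real
# PCSCDB schema is not published here; all names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE signal_point (
    point_id       TEXT PRIMARY KEY,          -- addressable database point
    point_type     TEXT CHECK (point_type IN
        ('IO', 'MANUAL', 'CONSTANT', 'SETPOINT', 'CALCULATED', 'CONTROL_FLAG')),
    dcm_reference  TEXT,                      -- design document reference
    mecl_reference TEXT                       -- plant equipment/component reference
);
CREATE TABLE point_usage (                    -- where the point is used
    point_id TEXT REFERENCES signal_point(point_id),
    used_in  TEXT                             -- application, display mimic, report, ...
);
CREATE TABLE point_history (                  -- signal life-cycle history
    point_id   TEXT REFERENCES signal_point(point_id),
    changed_on TEXT,
    change     TEXT
);
""")
conn.execute("INSERT INTO signal_point VALUES ('TE-1234', 'IO', 'DCM-0001', 'MECL-4711')")
conn.execute("INSERT INTO point_usage VALUES ('TE-1234', 'SCADA display mimic RCS-01')")
print(conn.execute("SELECT * FROM signal_point").fetchall())
```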

  14. A utilização de um software infantil na terapia fonoaudiológica de Distúrbio do Processamento Auditivo Central (The use of children's software in the treatment of Central Auditory Processing Disorder)

    Directory of Open Access Journals (Sweden)

    Juliana Schwambach Martins

    2008-01-01

    Full Text Available The aim of this study was to verify the effectiveness of the use of computer resources in the speech-language therapy of Central Auditory Processing Disorder, in order to normalize the altered auditory abilities. Two individuals with a diagnosis of Central Auditory Processing Disorder, one male and one female, both nine years old, participated in this study. The patients underwent eight sessions of speech-language therapy using the software and were afterwards given a reassessment of central auditory processing in order to verify the development of the auditory abilities and the effectiveness of the auditory training. It was verified that, after this informal auditory training, both patients showed adequate auditory abilities of temporal resolution, figure-ground for nonverbal and verbal sounds, and temporal ordering for verbal and nonverbal sounds. It is concluded that the computer, as a therapeutic tool, is a stimulating resource that enables the development of altered auditory abilities in patients with Central Auditory Processing Disorder.

  15. Role of computers in CANDU safety systems

    International Nuclear Information System (INIS)

    Hepburn, G.A.; Gilbert, R.S.; Ichiyen, N.M.

    1985-01-01

    Small digital computers are playing an expanding role in the safety systems of CANDU nuclear generating stations, both as active components in the trip logic, and as monitoring and testing systems. The paper describes three recent applications: (i) A programmable controller was retro-fitted to Bruce ''A'' Nuclear Generating Station to handle trip setpoint modification as a function of booster rod insertion. (ii) A centralized monitoring computer to monitor both shutdown systems and the Emergency Coolant Injection system, is currently being retro-fitted to Bruce ''A''. (iii) The implementation of process trips on the CANDU 600 design using microcomputers. While not truly a retrofit, this feature was added very late in the design cycle to increase the margin against spurious trips, and has now seen about 4 unit-years of service at three separate sites. Committed future applications of computers in special safety systems are also described. (author)

  16. Single instruction computer architecture and its application in image processing

    Science.gov (United States)

    Laplante, Phillip A.

    1992-03-01

    A single processing computer system using only half-adder circuits is described. In addition, it is shown that only a single hard-wired instruction is needed in the control unit to obtain a complete instruction set for this general purpose computer. Such a system has several advantages. First, it is intrinsically a RISC machine--in fact, the 'ultimate RISC' machine. Second, because only a single type of logic element is employed, the entire computer system can be easily realized on a single, highly integrated chip. Finally, due to the homogeneous nature of the computer's logic elements, the computer has possible implementations as an optical or chemical machine. This in turn suggests possible paradigms for neural computing and artificial intelligence. After showing how we can implement a full-adder, min, max and other operations using the half-adder, we use an array of such full-adders to implement the dilation operation for two black and white images. Next we implement the erosion operation of two black and white images using a relative complement function and the properties of erosion and dilation. This approach was inspired by papers by van der Poel, in which a single instruction is used to furnish a complete set of general purpose instructions, and by Böhm and Jacopini, where it is shown that any problem can be solved using a Turing machine with one entry and one exit.
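
    The composition steps the abstract names, building a full adder from half-adders and then using OR/min/max-style operations to realize binary dilation, can be sketched directly. The following is a minimal illustration of those constructions, not of the single-instruction architecture itself.

```python
# Build a full adder out of two half-adders plus an OR, then use the same
# bitwise machinery to dilate a binary image by a 3x3 structuring element.
# This only illustrates the constructions named in the abstract.

def half_adder(a, b):
    return a ^ b, a & b            # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2             # (sum, carry-out)

def dilate(img):
    """Binary dilation with a 3x3 square: OR of the pixel and its neighbours."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        out[y][x] |= img[ny][nx]
    return out

print(full_adder(1, 1, 1))         # -> (1, 1)
print(dilate([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))
```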

  17. The role of personnel marketing in the process of building corporate social responsibility strategy of a scientific unit

    Directory of Open Access Journals (Sweden)

    Sylwia Jarosławska-Sobór

    2015-09-01

    Full Text Available The goal of this article is to discuss the significance of human capital in the process of building the strategy of social responsibility and the role of personnel marketing in the process. Dynamically changing social environment has enforced a new way of looking at non-material resources. Organizations have understood that it is human capital and social competences that have a significant impact on the creation of an organization’s value, generating profits, as well as gaining competitive advantage in the 21st century. Personnel marketing is now a key element in the process of implementation of the CSR concept and building the value of contemporary organizations, especially such unique organizations as scientific units. In this article you will find a discussion concerning the basic values regarded as crucial by the Central Mining Institute in the context of their significance for the paradigm of social responsibility. Such an analysis was carried out on the basis of the experiences of Central Mining Institute (GIG in the development of strategic CSR, which takes into consideration the specific character of the Institute as a scientific unit.

  18. A Framework for Modeling Competitive and Cooperative Computation in Retinal Processing

    Science.gov (United States)

    Moreno-Díaz, Roberto; de Blasio, Gabriel; Moreno-Díaz, Arminda

    2008-07-01

    The structure of the retina suggests that it should be treated (at least from the computational point of view) as a layered computer. Different retinal cells contribute to the coding of the signals down to the ganglion cells. Also, because of the nature of the specialization of some ganglion cells, the structure suggests that all these specialization processes should take place at the inner plexiform layer and that they should be of a local character, prior to a global integration and frequency-spike coding by the ganglion cells. The framework we propose consists of a layered computational structure, where the outer layers essentially provide band-pass space-time filtered signals which are progressively delayed, at least for their formal treatment. Specialization is supposed to take place at the inner plexiform layer by the action of spatio-temporal microkernels (acting very locally) having a center-periphery space-time structure. The resulting signals are then integrated by the ganglion cells through macrokernel structures. Practically all types of specialization found in different vertebrate retinas, as well as the quasilinear behavior in some higher vertebrates, can be modeled and simulated within this framework. Finally, possible feedback from central structures is considered. Though its relevance to retinal processing is not definitive, it is included here for the sake of completeness, since it is a formal requisite for recursiveness.
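
    The center-periphery spatial structure of the microkernels mentioned above is commonly approximated by a difference of Gaussians. The sketch below builds such a spatial kernel; the widths and weights are arbitrary illustrative values, and the temporal dimension of the space-time kernels is omitted.

```python
# Spatial center-surround (difference-of-Gaussians) kernel, a common
# approximation of the center-periphery microkernels discussed above.
# Sigmas and weights are illustrative, and the temporal axis is omitted.
import numpy as np

def dog_kernel(size=15, sigma_center=1.0, sigma_surround=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround       # excitatory center, inhibitory surround

k = dog_kernel()
print(k.shape, float(k.sum()))     # roughly band-pass: near-zero DC response
```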

  19. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    Full Text Available Wide application of the Internet of Things (IoT) system has been increasingly demanding more hardware facilities for processing various resources including data, information, and knowledge. With the rapid growth of generated resource quantity, it is difficult to adapt to this situation by using traditional cloud computing models. Fog computing enables storage and computing services to perform at the edge of the network to extend cloud computing. However, there are some problems such as restricted computation, limited storage, and expensive network bandwidth in Fog computing applications. It is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism of typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of Data Graph, Information Graph, and Knowledge Graph. The proposed mechanism aims to minimize processing cost over network, computation, and storage while maximizing the performance of processing in a business value driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment. Meanwhile, conversions between resource types deliver support for dynamically allocating network resources.

  20. A universal electronic adaptation of automated biochemical analysis instruments to a central processing computer by applying CAMAC signals

    International Nuclear Information System (INIS)

    Schaefer, R.

    1975-01-01

    A universal expansion of a CAMAC-subsystem - BORER 3000 - for adapting analysis instruments in biochemistry to a processing computer is described. The possibility of standardizing input interfaces for lab instruments with such circuits is discussed and the advantages achieved by applying the CAMAC-specifications are described

  1. Controlling Laboratory Processes From A Personal Computer

    Science.gov (United States)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.
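
    The core mechanism described above, user-defined command names bound to user-written device-driving routines, is essentially a dispatch table. A minimal modern sketch of that idea (in Python rather than the FORTRAN 77/Pascal of the original system, and with hypothetical device and command names) might look like this:

```python
# Minimal dispatch-table sketch of natural-language process control:
# user-defined command words are bound to user-written device routines.
# Device names and commands here are hypothetical examples.

def open_valve(valve_id):
    print(f"opening valve {valve_id}")

def read_temperature(sensor_id):
    print(f"reading temperature from sensor {sensor_id}")
    return 23.5

COMMANDS = {               # command word -> device-driving routine
    "OPEN": open_valve,
    "READTEMP": read_temperature,
}

def execute(line: str):
    word, *args = line.split()
    return COMMANDS[word.upper()](*args)

execute("OPEN V-101")
execute("READTEMP T-07")
```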

  2. An Investigation of the Artifacts and Process of Constructing Computers Games about Environmental Science in a Fifth Grade Classroom

    Science.gov (United States)

    Baytak, Ahmet; Land, Susan M.

    2011-01-01

    This study employed a case study design (Yin, "Case study research, design and methods," 2009) to investigate the processes used by 5th graders to design and develop computer games within the context of their environmental science unit, using the theoretical framework of "constructionism." Ten fifth graders designed computer games using "Scratch"…

  3. Development of process control capability through the Browns Ferry Integrated Computer System using Reactor Water Clanup System as an example. Final report

    International Nuclear Information System (INIS)

    Smith, J.; Mowrey, J.

    1995-12-01

    This report describes the design, development and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU) using a Computer Simulation Platform which simulates the RWCU System and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of the soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow for operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 Thermal/Hydraulic Simulation Model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report. The testing and acceptance program and results are also detailed in this report. A discussion of potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, this report contains a section on industry issues associated with installation of process control systems in nuclear power plants

  4. Sandia's computer support units: The first three years

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.N. [Sandia National Labs., Albuquerque, NM (United States). Labs. Computing Dept.

    1997-11-01

    This paper describes the method by which Sandia National Laboratories has deployed information technology to the line organizations and to the desktop as part of the integrated information services organization under the direction of the Chief Information Officer. This deployment has been done by the Computer Support Unit (CSU) Department. The CSU approach is based on the principle of providing local customer service with a corporate perspective. Success required an approach that was both customer-compelled at times and market- or corporate-focused in most cases. Above all, a complete solution was required that included a comprehensive method of technology choices and development, process development, technology implementation, and support. It is the author's hope that this information will be useful in the development of a customer-focused business strategy for information technology deployment and support. Descriptions of current status reflect the status as of May 1997.

  5. Application of process computers for automation of power plants in Hungary

    Energy Technology Data Exchange (ETDEWEB)

    Papp, G.; Szilagyi, R.

    1982-04-01

    An automation system for normal operation and accidents is presented. In normal operation, the operators have only a supervisory function. In case of disturbances, only a minimum number of units will fail. Process computer data are: storage cycle: 750 ns; parallel system; configuration length: 12 bits; one-address binary two's-complement arithmetic; operative ferromagnetic storage: 24 K; core registers: 5. There are two peripheral disk storages with a total capacity of 6 Mbit and two floppy disk storages, each with a capacity of 800 Kbit.

  6. Centralized digital computer control of a research nuclear reactor

    International Nuclear Information System (INIS)

    Crawford, K.C.

    1987-01-01

    A hardware and software design for the centralized control of a research nuclear reactor by a digital computer are presented, as well as an investigation of automatic-feedback control. Current reactor-control philosophies including redundancy, inherent safety in failure, and conservative-yet-operational scram initiation were used as the bases of the design. The control philosophies were applied to the power-monitoring system, the fuel-temperature monitoring system, the area-radiation monitoring system, and the overall system interaction. Unlike the single-function analog computers currently used to control research and commercial reactors, this system will be driven by a multifunction digital computer. Specifically, the system will perform control-rod movements to conform with operator requests, automatically log the required physical parameters during reactor operation, perform the required system tests, and monitor facility safety and security. Reactor power control is based on signals received from ion chambers located near the reactor core. Absorber-rod movements are made to control the rate of power increase or decrease during power changes and to control the power level during steady-state operation. Additionally, the system incorporates a rudimentary level of artificial intelligence

  7. 2008 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-03-01

    This report presents the 2008 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of the CNTA was transferred from the DOE Office of Environmental Management (DOE-EM) to DOE-LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 2005) entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site during fiscal year 2008. This is the second groundwater monitoring report prepared by DOE-LM for the CNTA.

  8. 2008 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    International Nuclear Information System (INIS)

    2009-01-01

    This report presents the 2008 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of the CNTA was transferred from the DOE Office of Environmental Management (DOE-EM) to DOE-LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 2005) entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site during fiscal year 2008. This is the second groundwater monitoring report prepared by DOE-LM for the CNTA

  9. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  10. BarraCUDA - a fast short read sequence aligner using graphics processing units

    Directory of Open Access Journals (Sweden)

    Klus Petr

    2012-01-01

    Full Text Available Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  11. BarraCUDA - a fast short read sequence aligner using graphics processing units

    LENUS (Irish Health Repository)

    Klus, Petr

    2012-01-13

    Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  12. Central vestibular dysfunction in an otorhinolaryngological vestibular unit: incidence and diagnostic strategy.

    Science.gov (United States)

    Mostafa, Badr E; Kahky, Ayman O El; Kader, Hisham M Abdel; Rizk, Michael

    2014-07-01

    Introduction Vertigo can be due to a variety of central and peripheral causes. The relative incidence of central causes is underestimated. This may have an important impact on the patients' management and prognosis. Objective The objective of this work is to determine the incidence of central vestibular disorders in patients presenting to a vestibular unit in a tertiary referral academic center. It also aims at determining the best strategy to increase the diagnostic yield of the patients' visit. Methods This is a prospective observational study on 100 consecutive patients with symptoms suggestive of vestibular dysfunction. All patients completed a structured questionnaire and received bedside and vestibular examination and neuroimaging as required. Results There were 69 women and 31 men. Their ages ranged between 28 and 73 (mean 42.48 years). Provisional videonystagmography (VNG) results were: 40% benign paroxysmal positional vertigo (BPPV), 23% suspicious of central causes, 18% undiagnosed, 15% Meniere disease, and 4% vestibular neuronitis. Patients with an unclear diagnosis or central features (41) had magnetic resonance imaging (MRI) and Doppler studies. Combining data from history, VNG, and imaging studies, 23 patients (23%) were diagnosed as having a central vestibular lesion (10 with generalized ischemia/vertebra basilar insufficiency, 4 with multiple sclerosis, 4 with migraine vestibulopathy, 4 with phobic postural vertigo, and 1 with hyperventilation-induced nystagmus). Conclusions Combining a careful history with clinical examination, VNG, MRI, and Doppler studies decreases the number of undiagnosed cases and increases the detection of possible central lesions.

  13. Central Vestibular Dysfunction in an Otorhinolaryngological Vestibular Unit: Incidence and Diagnostic Strategy

    Directory of Open Access Journals (Sweden)

    Mostafa, Badr E.

    2014-03-01

    Full Text Available Introduction Vertigo can be due to a variety of central and peripheral causes. The relative incidence of central causes is underestimated. This may have an important impact on the patients' management and prognosis. Objective The objective of this work is to determine the incidence of central vestibular disorders in patients presenting to a vestibular unit in a tertiary referral academic center. It also aims at determining the best strategy to increase the diagnostic yield of the patients' visit. Methods This is a prospective observational study on 100 consecutive patients with symptoms suggestive of vestibular dysfunction. All patients completed a structured questionnaire and received bedside and vestibular examination and neuroimaging as required. Results There were 69 women and 31 men. Their ages ranged between 28 and 73 (mean 42.48 years). Provisional videonystagmography (VNG) results were: 40% benign paroxysmal positional vertigo (BPPV), 23% suspicious of central causes, 18% undiagnosed, 15% Meniere disease, and 4% vestibular neuronitis. Patients with an unclear diagnosis or central features (41) had magnetic resonance imaging (MRI) and Doppler studies. Combining data from history, VNG, and imaging studies, 23 patients (23%) were diagnosed as having a central vestibular lesion (10 with generalized ischemia/vertebra basilar insufficiency, 4 with multiple sclerosis, 4 with migraine vestibulopathy, 4 with phobic postural vertigo, and 1 with hyperventilation-induced nystagmus). Conclusions Combining a careful history with clinical examination, VNG, MRI, and Doppler studies decreases the number of undiagnosed cases and increases the detection of possible central lesions.

  14. Future evolution of the Fast TracKer (FTK) processing unit

    CERN Document Server

    Gentsos, C; The ATLAS collaboration; Giannetti, P; Magalotti, D; Nikolaidis, S

    2014-01-01

    The Fast Tracker (FTK) processor [1] for the ATLAS experiment has a computing core made of 128 Processing Units that reconstruct tracks in the silicon detector in a ~100 μsec deep pipeline. The track parameter resolution provided by FTK enables the HLT trigger to identify efficiently and reconstruct significant samples of fermionic Higgs decays. Data processing speed is achieved with custom VLSI pattern recognition, linearized track fitting executed inside modern FPGAs, pipelining, and parallel processing. One large FPGA executes full-resolution track fitting inside low-resolution candidate tracks found by a set of 16 custom ASIC devices, called Associative Memories (AM chips) [2]. The FTK dual structure, based on the cooperation of dedicated VLSI AM devices and programmable FPGAs, is maintained to achieve further technology performance, miniaturization, and integration of the current state-of-the-art prototypes. This makes it possible to fully exploit new applications within and outside the High Energy Physics field. We plan t...
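
    The "linearized track fitting" executed in the FPGAs amounts to estimating the helix parameters as a fixed affine function of the hit coordinates within a detector sector, with the goodness of fit computed from a second set of linear combinations. A schematic numerical sketch of that idea, with entirely made-up constants and dimensions, is shown below; it is not the FTK firmware algorithm.

```python
# Schematic sketch of linearized track fitting: within one detector sector,
# track parameters are estimated as an affine function of the hit coordinates,
# p = C @ x + q, and a chi-square is formed from further linear combinations.
# All matrices, dimensions and values below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_hits, n_params = 8, 5                     # e.g. 8 coordinates -> 5 helix parameters
C = rng.normal(size=(n_params, n_hits))     # pre-computed fit constants (per sector)
q = rng.normal(size=n_params)
S = rng.normal(size=(n_hits - n_params, n_hits))  # constraint (chi-square) constants
s0 = rng.normal(size=n_hits - n_params)

def linearized_fit(x):
    params = C @ x + q                      # track parameters
    residuals = S @ x + s0                  # ~0 for a hit set consistent with a track
    chi2 = float(residuals @ residuals)
    return params, chi2

hits = rng.normal(size=n_hits)              # one candidate's hit coordinates
p, chi2 = linearized_fit(hits)
print(p, chi2)
```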

  15. Integration of process computer systems to Cofrentes NPP

    International Nuclear Information System (INIS)

    Saettone Justo, A.; Pindado Andres, R.; Buedo Jimenez, J.L.; Jimenez Fernandez-Sesma, A.; Delgado Muelas, J.A.

    1997-01-01

    The existence of three different process computer systems in Cofrentes NPP and the ageing of two of them have led to the need for their integration into a single real time computer system, known as Integrated ERIS-Computer System (SIEC), which covers the functionality of the three systems: Process Computer (PC), Emergency Response Information System (ERIS) and Nuclear Calculation Computer (OCN). The paper describes the integration project developed, which has essentially consisted in the integration of PC, ERIS and OCN databases into a single database, the migration of programs from the old process computer into the new SIEC hardware-software platform and the installation of a communications programme to transmit all necessary data for OCN programs from the SIEC computer, which in the new configuration is responsible for managing the databases of the whole system. (Author)

  16. Advanced computational modelling for drying processes – A review

    International Nuclear Information System (INIS)

    Defraeye, Thijs

    2014-01-01

    Highlights: • Understanding the product dehydration process is a key aspect in drying technology. • Advanced modelling thereof plays an increasingly important role for developing next-generation drying technology. • Dehydration modelling should be more energy-oriented. • An integrated “nexus” modelling approach is needed to produce more energy-smart products. • Multi-objective process optimisation requires development of more complete multiphysics models. - Abstract: Drying is one of the most complex and energy-consuming chemical unit operations. R and D efforts in drying technology have skyrocketed in the past decades, as new drivers emerged in this industry next to procuring prime product quality and high throughput, namely reduction of energy consumption and carbon footprint as well as improving food safety and security. Solutions are sought in optimising existing technologies or developing new ones which increase energy and resource efficiency, use renewable energy, recuperate waste heat and reduce product loss, thus also the embodied energy therein. Novel tools are required to push such technological innovations and their subsequent implementation. Particularly computer-aided drying process engineering has a large potential to develop next-generation drying technology, including more energy-smart and environmentally-friendly products and dryers systems. This review paper deals with rapidly emerging advanced computational methods for modelling dehydration of porous materials, particularly for foods. Drying is approached as a combined multiphysics, multiscale and multiphase problem. These advanced methods include computational fluid dynamics, several multiphysics modelling methods (e.g. conjugate modelling), multiscale modelling and modelling of material properties and the associated propagation of material property variability. Apart from the current challenges for each of these, future perspectives should be directed towards material property

  17. On-line satellite/central computer facility of the Multiparticle Argo Spectrometer System

    International Nuclear Information System (INIS)

    Anderson, E.W.; Fisher, G.P.; Hien, N.C.; Larson, G.P.; Thorndike, A.M.; Turkot, F.; von Lindern, L.; Clifford, T.S.; Ficenec, J.R.; Trower, W.P.

    1974-09-01

    An on-line satellite/central computer facility has been developed at Brookhaven National Laboratory as part of the Multiparticle Argo Spectrometer System (MASS). This facility, consisting of a PDP-9 and a CDC-6600, has been successfully used in the study of proton-proton interactions at 28.5 GeV/c. (U.S.)

  18. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of automatic unification of different sentiment lexicon scores is that there are multiple lexical entries for which the classification as positive, negative, or neutral {P,N,Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and −1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is defined accordingly for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries, which represents a threefold reduction in the computing time of the UnifiedMetrics procedure.
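
    The unification step described above hinges on measuring, with the Pearson correlation coefficient, how consistently two source lexicons score the entries they share, and then combining the scores. The sketch below shows that calculation on a toy pair of lexicons; the combination rule (a simple average of the shared scores) is an illustrative assumption, not necessarily the USL's UnifiedMetrics procedure.

```python
# Toy illustration of lexicon unification: measure agreement between two
# sentiment lexicons on their shared entries with the Pearson coefficient,
# then combine scores. The averaging rule is an illustrative assumption.
import numpy as np

lexicon_a = {"good": 0.8, "bad": -0.7, "okay": 0.1, "awful": -0.9}
lexicon_b = {"good": 0.6, "bad": -0.5, "okay": 0.0, "great": 0.9}

shared = sorted(set(lexicon_a) & set(lexicon_b))
a = np.array([lexicon_a[w] for w in shared])
b = np.array([lexicon_b[w] for w in shared])

r = np.corrcoef(a, b)[0, 1]                 # Pearson correlation in [-1, 1]
print(f"agreement on shared entries: r = {r:.3f}")

unified = {w: (lexicon_a[w] + lexicon_b[w]) / 2 for w in shared}
def label(score):                           # {P, N, Z} classification
    return "P" if score > 0.05 else "N" if score < -0.05 else "Z"
print({w: (round(s, 2), label(s)) for w, s in unified.items()})
```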

  19. Characteristics of the TRISTAN control computer network

    International Nuclear Information System (INIS)

    Kurokawa, Shinichi; Akiyama, Atsuyoshi; Katoh, Tadahiko; Kikutani, Eiji; Koiso, Haruyo; Oide, Katsunobu; Shinomoto, Manabu; Kurihara, Michio; Abe, Kenichi

    1986-01-01

    Twenty-four minicomputers forming an N-to-N token-ring network control the TRISTAN accelerator complex. The computers are linked by optical fiber cables with 10 Mbps transmission speed. The software system is based on NODAL, a multicomputer interpretive language developed at the CERN SPS. The high-level services offered to the users of the network are remote execution by the EXEC, EXEC-P and IMEX commands of NODAL and uniform file access throughout the system. The network software was designed to achieve the fast response of the EXEC command. The performance of the network is also reported. Tasks that overload the minicomputers are processed on the KEK central computers. One minicomputer in the network serves as a gateway to KEKNET, which connects the minicomputer network and the central computers. The communication with the central computers is managed within the framework of the KEK NODAL system. NODAL programs communicate with the central computers calling NODAL functions; functions for exchanging data between a data set on the central computers and a NODAL variable, submitting a batch job to the central computers, checking the status of the submitted job, etc. are prepared. (orig.)

  20. Launch Site Computer Simulation and its Application to Processes

    Science.gov (United States)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.

  1. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  2. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    Science.gov (United States)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
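
    The core of the MC estimate is simple: draw many samples from the positional uncertainty of the two objects and count the fraction of samples whose separation falls below the combined collision radius. The sketch below shows that counting step with Gaussian uncertainties at a single epoch; the dynamics, time stepping, and quartic interpolation mentioned above are omitted, and all numbers are illustrative.

```python
# Minimal Monte Carlo collision-probability estimate at a single epoch:
# sample both objects' positions from Gaussian uncertainties and count how
# often the separation is below the combined collision radius. Propagation,
# time stepping and close-approach interpolation are deliberately omitted.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1_000_000

mean_a = np.array([0.0, 0.0, 0.0])          # km, illustrative values
mean_b = np.array([0.05, 0.02, -0.01])
cov_a = np.diag([0.02, 0.03, 0.01]) ** 2
cov_b = np.diag([0.04, 0.02, 0.02]) ** 2
combined_radius = 0.02                      # km (sum of the two hard-body radii)

pos_a = rng.multivariate_normal(mean_a, cov_a, n_samples)
pos_b = rng.multivariate_normal(mean_b, cov_b, n_samples)
miss = np.linalg.norm(pos_a - pos_b, axis=1)

p_collision = np.mean(miss < combined_radius)
print(f"estimated collision probability: {p_collision:.2e}")
```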

  3. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley

    2018-02-01

    Full Text Available Computer vision has been studied from many perspectives. It extends from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning, and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of recent technologies and theoretical concepts explaining the development of computer vision, especially as it relates to image processing, across different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand events or descriptions, and recognize scenic patterns. It applies methods from a wide range of application domains together with massive data analysis. This paper summarizes recent developments and reviews related to computer vision, image processing, and related studies. We categorize the computer vision mainstream into four groups, e.g., image processing, object recognition, and machine learning. We also provide a brief explanation of up-to-date information about the techniques and their performance.

  4. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product- and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text ...

  5. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
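
    The anisotropic, rotation-invariant GLCM statistics behind the PANTEX index reduce, for each window, to accumulating a grey-level co-occurrence matrix over several pixel offsets and summarizing it with a texture measure such as contrast. The sketch below computes a single-offset GLCM contrast for one window in plain NumPy; it is a didactic reduction, not the operational PANTEX workflow.

```python
# Didactic reduction of a GLCM texture measure: build a grey-level
# co-occurrence matrix for one pixel offset inside a window and summarize it
# with the "contrast" statistic. The real PANTEX index combines several
# offsets (rotation invariance) and slides the window over the whole image.
import numpy as np

def glcm_contrast(window, levels=8, offset=(0, 1)):
    q = (window.astype(float) / window.max() * (levels - 1)).astype(int)
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))   # GLCM "contrast"

window = np.random.default_rng(0).integers(0, 255, size=(9, 9))
print(glcm_contrast(window))
```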

  6. Some computer applications and digital image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Lowinger, T.

    1981-01-01

    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors. This algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by the application of the algorithm can be used to characterize the lesions. The applicability to nuclear medicine images of ten edge detection methods in general usage in digital image processing was evaluated. A general model of image formation by scintillation cameras is developed. The model assumes that objects to be imaged are composed of a finite set of points. The validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms

  7. Computer-Aided Modeling of Lipid Processing Technology

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel

    2011-01-01

    increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation...... are widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately...... to develop systematic computer-aided methods (property models) and tools (database) related to the prediction of the necessary physical properties suitable for design and analysis of processes employing lipid technologies. The methods and tools include: the development of a lipid-database (CAPEC...

  8. Fast ray-tracing of human eye optics on Graphics Processing Units.

    Science.gov (United States)

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing using modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
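
    At its core, each traced ray is refracted at every ocular interface according to Snell's law in vector form. The sketch below implements that single refraction step for an arbitrary surface normal; the interface geometry, wavelength-dependent indices (chromatic aberration), and the rest of the eye model are left out, and the numerical values are illustrative.

```python
# Vector form of Snell's law: refract one ray direction at an interface with
# refractive indices n1 -> n2. This is the elementary step repeated at every
# ocular surface in a retinal ray tracer; the surrounding eye geometry,
# chromatic dispersion and GPU parallelization are omitted here.
import numpy as np

def refract(direction, normal, n1, n2):
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    if cos_i < 0:                       # make sure the normal opposes the ray
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None                     # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Air (n = 1.000) into cornea (n ~ 1.376), ray hitting a surface tilted 30 degrees.
ray = np.array([0.0, 0.0, 1.0])
normal = np.array([0.0, np.sin(np.radians(30)), -np.cos(np.radians(30))])
print(refract(ray, normal, 1.000, 1.376))
```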

  9. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
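
    The baseline operation these GPU kernels accelerate is conventional pulse compression: matched filtering of the received signal with the transmitted waveform, usually done by multiplication in the frequency domain. A minimal CPU/NumPy sketch of that baseline (with a linear-FM chirp as an illustrative waveform) follows; the adaptive variants and CUDA-specific optimizations discussed above are not reproduced here.

```python
# Baseline pulse compression: matched-filter the received signal with the
# transmitted waveform via FFT-domain multiplication. A linear-FM chirp is
# used as an illustrative waveform; the adaptive and CUDA-optimized variants
# described in the abstract are beyond this sketch.
import numpy as np

fs, pulse_len, bandwidth = 1e6, 100e-6, 200e3
t = np.arange(0, pulse_len, 1 / fs)
chirp = np.exp(1j * np.pi * (bandwidth / pulse_len) * t**2)   # LFM pulse

# Received signal: the pulse delayed by 300 samples, buried in noise.
rng = np.random.default_rng(1)
rx = np.zeros(2048, dtype=complex)
rx[300:300 + chirp.size] = 0.5 * chirp
rx += 0.1 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

n = rx.size
matched = np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n)))
print("peak at sample", int(np.argmax(np.abs(matched))))      # ~ 300
```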

  10. Practical Secure Computation with Pre-Processing

    DEFF Research Database (Denmark)

    Zakarias, Rasmus Winther

    Secure Multiparty Computation has been divided between protocols best suited for binary circuits and protocols best suited for arithmetic circuits. With their MiniMac protocol in [DZ13], Damgård and Zakarias take an important step towards bridging these worlds with an arithmetic protocol tuned...... space for pre-processing material than computing the non-linear parts online (depends on the quality of circuit of course). Surprisingly, even for our optimized AES-circuit this is not the case. We further improve the design of the pre-processing material and end up with only 10 megabytes of pre...... a protocol for small field arithmetic to do fast large integer multiplications. This is achieved by devising pre-processing material that allows the Toom-Cook multiplication algorithm to run between the parties with linear communication complexity. With this result computation on the CPU by the parties...

  11. Marrying Content and Process in Computer Science Education

    Science.gov (United States)

    Zendler, A.; Spannagel, C.; Klaudt, D.

    2011-01-01

    Constructivist approaches to computer science education emphasize that as well as knowledge, thinking skills and processes are involved in active knowledge construction. K-12 computer science curricula must not be based on fashions and trends, but on contents and processes that are observable in various domains of computer science, that can be…

  12. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units.

    Science.gov (United States)

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A

    2013-02-01

    Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction.

  13. Development of interface technology between unit processes in E-Refining process

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S. H.; Lee, H. S.; Kim, J. G. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    Pyroprocessing is composed mainly of four subprocesses: electrolytic reduction, electrorefining, electrowinning, and waste salt regeneration/solidification. The electrorefining process, one of the main processes of pyroprocessing used to recover useful elements from spent fuel, is under development by the Korea Atomic Energy Research Institute as a subprocess of the pyrochemical treatment of spent PWR fuel. The CERS (Continuous ElectroRefining System) is composed of unit processes such as an electrorefiner, a salt distiller, a melting furnace for the U-ingot, and a U-chlorinator (UCl{sub 3} making equipment), as shown in Fig. 1. In this study, the interface technology between unit processes in the E-Refining system is investigated and developed for the establishment of an integrated E-Refining operation system as a part of integrated pyroprocessing

  14. Seismotectonic model of Central Europe

    International Nuclear Information System (INIS)

    Prochazkova, D.; Roth, Z.

    1994-01-01

    Earthquakes belong to natural disasters which are associated with tectonic processes in the interior of the Earth. They are extremely devastating in populated areas; they cause human losses and damage to property and the environment. To mitigate the potential effects of earthquakes it is necessary that relief and mitigation structures operate following an earthquake, but it is also essential to stimulate and enhance preparedness and prevention. Prevention includes the development of scenarios of potential earthquakes, hazard mapping, formulation of regulations, etc. Preparedness includes the installation and operation of warning systems, establishing communication networks to operate before, during, and after earthquakes. As nuclear technology belongs to high-risk technologies with regard to human health and the environment and its hazard substantially increases in consequence of earthquakes, in the siting of a nuclear plant engineering solutions are generally available to mitigate the potential vibratory effects through design. For the choice of a suitable engineering solution, reliable data must be processed by reliable techniques. The IAEA safety guide of the safety series No. 50-SG-S1 (Rev. 1) specifies the demands on data and on their processing and also on the regional seismotectonic model. In view of this, the regional seismotectonic model of Central Europe was created. The paper presents regional geological characteristics of Central Europe and a chronological model of neotectonic movements in Central Europe with specification of neotectonic regional units and their present movements. Moreover, it contains earthquake characteristics for Central Europe and the specification of seismogenic movements. It was found that the genesis of local regions with occurrence of the strongest earthquakes is connected with several movement trends in the last 5 Ma. Six more or less tectonically separate regional units were revealed. The earthquake epicenters often

  15. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing

  16. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction. Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  17. Value of computed tomography and magnetic resonance imaging in diagnosis of central nervous system

    International Nuclear Information System (INIS)

    Walecka, I.; Sicinska, J.; Szymanska, E.; Rudnicka, L.; Furmanek, M.; Walecki, J.; Olszewska, M.; Rudnicka, L.; Walecki, J.

    2006-01-01

    Systemic sclerosis is an autoimmune connective tissue disease characterized by vascular abnormalities and fibrotic changes in skin and internal organs. The aim of the study was to investigate involvement of the central nervous system in systemic sclerosis and the value of computed tomography (CT) and magnetic resonance imaging (MRI) in evaluation of central nervous system involvement in systemic sclerosis. 26 patients with neuropsychiatric symptoms in the course of systemic sclerosis were investigated for central nervous system abnormalities by computed tomography (CT) and magnetic resonance imaging (MRI). Among these 26 symptomatic patients, lesions in brain MRI and CT examinations were present in 54% and 50% of patients, respectively. The most common findings (in 46% of all patients) were signs of cortical and subcortical atrophy, seen in both MRI and CT. Single and multiple focal lesions, predominantly in the white matter, were detected by MRI significantly more frequently than by CT (in 62% and 15% of patients, respectively). These data indicate that brain involvement is common in patients with severe systemic sclerosis. MRI shows significantly higher sensitivity than CT in detecting focal brain lesions in these patients. (author)

  18. Accelerating cardiac bidomain simulations using graphics processing units.

    Science.gov (United States)

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally very demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are needed to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations, where large sparse linear systems have to be solved in parallel with advanced numerical techniques, are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
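
    The computational core being accelerated here is the repeated solution of large sparse FEM systems. As a rough, hedged illustration (not CARP code; the matrix below is a toy 1D Laplacian standing in for an unstructured-grid stiffness matrix), the following SciPy sketch shows the kind of iterative Krylov solve that a multi-GPU implementation distributes across devices:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Toy 1D Laplacian as a stand-in for a FEM stiffness matrix; in a bidomain
# solver a system like this is assembled on an unstructured grid and solved
# at every time step, which is the part offloaded to (multi-)GPU linear algebra.
n = 10_000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")
b = np.ones(n)

x, info = cg(A, b, maxiter=5000)   # conjugate gradient solve
print("converged" if info == 0 else f"cg info={info}", float(np.linalg.norm(A @ x - b)))
```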

  19. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  20. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first function, conversion from analogue to digital mode, is not within the control of the operator, but the second type of manipulation is under the control of the operator. These types of manipulation should be done carefully, without sacrificing the integrity of the incoming information
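
    As a hedged sketch of the acquisition step described above (the matrix size, ADC resolution and signal range are assumptions, not values from the text), the digitization of analogue X/Y position signals and their accumulation into an image matrix can be written as:

```python
import numpy as np

def acquire_frame(x_volts, y_volts, adc_bits=6, full_scale=10.0):
    """Digitize analogue X/Y position signals (0..full_scale volts) with an
    ADC of adc_bits resolution and histogram the events into a square image
    matrix (64x64 for a 6-bit ADC). Matrix size and voltage range are
    illustrative, not fixed by the text."""
    n = 1 << adc_bits
    ix = np.clip((x_volts / full_scale * n).astype(int), 0, n - 1)
    iy = np.clip((y_volts / full_scale * n).astype(int), 0, n - 1)
    frame = np.zeros((n, n), dtype=np.int32)
    np.add.at(frame, (iy, ix), 1)          # one count per detected event
    return frame

rng = np.random.default_rng(0)
events = rng.normal(5.0, 1.0, size=(2, 100_000))   # synthetic event positions
img = acquire_frame(events[0], events[1])
print(img.shape, img.sum())
```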

  1. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.
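
    The kernel that such alignment tools evaluate enormous numbers of times, and that a GPU framework like ppsAlign parallelizes across threads, is the optimal superposition of two coordinate sets. A minimal NumPy sketch of Kabsch superposition and RMSD (illustrative only, not ppsAlign's actual code) is:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Optimal-rotation RMSD between two (N, 3) coordinate sets. This
    superposition/score kernel is evaluated for huge numbers of residue
    pairings during structure alignment, which is what makes the problem
    attractive for GPUs."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))            # avoid improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((Pc @ R - Qc) ** 2, axis=1))))

rng = np.random.default_rng(0)
P = rng.normal(size=(150, 3))
theta = 0.4
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
Q = P @ rot.T + 5.0                               # rotated and translated copy
print(f"RMSD = {kabsch_rmsd(P, Q):.3f}")          # close to 0 for a rigid transform
```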

  2. Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)

    International Nuclear Information System (INIS)

    Pellegrino, Esteban

    2011-01-01

    Monte Carlo simulation is well suited for solving the Boltzmann neutron transport equation in inhomogeneous media with complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes on a desktop PC. The interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is rapidly growing. This is due to the massive parallelism provided by the latest GPU technologies, which is the most promising solution to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. Results obtained in this work suggest that a speedup of several orders of magnitude is possible using the state-of-the-art GPU technologies. (author)
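
    The reason Monte Carlo transport maps so well onto GPUs is that particle histories are independent. A minimal Python sketch of a fixed-source history loop in a 1D slab (the cross sections, slab thickness and scattering model are illustrative assumptions, not taken from the report) is:

```python
import numpy as np

def slab_transmission(n_hist, thickness=5.0, sigma_t=1.0, scatter_prob=0.6, seed=1):
    """Estimate the fraction of source neutrons leaking through a 1D slab.
    Each history is independent, so a GPU implementation simply assigns one
    history (or a batch of histories) per thread. Cross sections are illustrative."""
    rng = np.random.default_rng(seed)
    leaked = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                               # start at left face, moving right
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # distance to next collision
            if x >= thickness:
                leaked += 1                            # escaped through the far face
                break
            if x < 0.0:
                break                                  # escaped backwards
            if rng.random() > scatter_prob:
                break                                  # absorbed
            mu = rng.uniform(-1.0, 1.0)                # isotropic scatter (direction cosine)
    return leaked / n_hist

print(f"transmission = {slab_transmission(50_000):.4f}")
```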

  3. Technical evaluation of proposed Ukrainian Central Radioactive Waste Processing Facility

    International Nuclear Information System (INIS)

    Gates, R.; Glukhov, A.; Markowski, F.

    1996-06-01

    This technical report is a comprehensive evaluation of the proposal by the Ukrainian State Committee on Nuclear Power Utilization to create a central facility for radioactive waste (not spent fuel) processing. The central facility is intended to process liquid and solid radioactive wastes generated from all of the Ukrainian nuclear power plants and the waste generated as a result of Chernobyl 1, 2 and 3 decommissioning efforts. In addition, this report provides general information on the quantity and total activity of radioactive waste in the 30-km Zone and the Sarcophagus from the Chernobyl accident. Processing options are described that may ultimately be used in the long-term disposal of selected 30-km Zone and Sarcophagus wastes. A detailed report on the issues concerning the construction of a Ukrainian Central Radioactive Waste Processing Facility (CRWPF) from the Ukrainian Scientific Research and Design Institute for Industrial Technology was obtained and incorporated into this report. This report outlines various processing options, their associated costs and construction schedules, which can be applied to solving the operating and decommissioning radioactive waste management problems in Ukraine. The costs and schedules are best estimates based upon the most current US industry practice and vendor information. This report focuses primarily on the handling and processing of what is defined in the US as low-level radioactive wastes

  4. Central load reduces peripheral processing: Evidence from incidental memory of background speech.

    Science.gov (United States)

    Halin, Niklas; Marsh, John E; Sörqvist, Patrik

    2015-12-01

    Is there a trade-off between central (working memory) load and peripheral (perceptual) processing? To address this question, participants were requested to undertake an n-back task in one of two levels of central/cognitive load (i.e., 1-back or 2-back) in the presence of a to-be-ignored story presented via headphones. Participants were told to ignore the background story, but they were given a surprise memory test of what had been said in the background story immediately after the n-back task was completed. Memory was poorer in the high central load (2-back) condition in comparison with the low central load (1-back) condition. Hence, when people compensate for higher central load by increasing attentional engagement, peripheral processing is constrained. Moreover, participants with high working memory capacity (WMC) - with a superior ability for attentional engagement - remembered less of the background story, but only in the low central load condition. Taken together, peripheral processing - as indexed by incidental memory of background speech - is constrained when task engagement is high. © 2015 The Authors. Scandinavian Journal of Psychology published by Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  5. Central Processing Dysfunctions in Children: A Review of Research.

    Science.gov (United States)

    Chalfant, James C.; Scheffelin, Margaret A.

    Research on central processing dysfunctions in children is reviewed in three major areas. The first, dysfunctions in the analysis of sensory information, includes auditory, visual, and haptic processing. The second, dysfunction in the synthesis of sensory information, covers multiple stimulus integration and short-term memory. The third area of…

  6. Computer Processing Of Tunable-Diode-Laser Spectra

    Science.gov (United States)

    May, Randy D.

    1991-01-01

    Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels are processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.

  7. Burn Injury Caused by Laptop Computers

    African Journals Online (AJOL)

    generated in central processing unit (CPU), graphics processing unit, hard drive, internal ... change its position. Discussion ... Suzuki et al. reported that the critical temperature for superficial burn was 37.8°C, for deep dermal burns 41.9°C and ... The laptop should be placed on a hard surface and not on soft surfaces like.

  8. Integration of distributed computing into the drug discovery process.

    Science.gov (United States)

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low cost. We give a short introduction to the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is embedding the grid seamlessly into the discovery process. User-friendly access to powerful algorithms without restrictions, such as a limited number of licenses, has to be the goal of grid computing in drug discovery.

  9. Central auditory processing and migraine: a controlled study.

    Science.gov (United States)

    Agessi, Larissa Mendonça; Villa, Thaís Rodrigues; Dias, Karin Ziliotto; Carvalho, Deusvenir de Souza; Pereira, Liliane Desgualdo

    2014-11-08

    This study aimed to verify and compare central auditory processing (CAP) performance in migraine patients with and without aura and in healthy controls. Forty-one volunteers of both genders, aged between 18 and 40 years, diagnosed with migraine with and without aura by the criteria of "The International Classification of Headache Disorders" (ICHD-3 beta), and a control group of the same age range with no headache history, were included. The Gaps-in-Noise (GIN), Duration Pattern Test (DPT) and Dichotic Digits Test (DDT) were used to assess central auditory processing performance. The volunteers were divided into 3 groups: migraine with aura (11), migraine without aura (15), and control group (15), matched by age and schooling. Subjects with and without aura performed significantly worse in the GIN test for the right ear (p = .006) and the left ear (p = .005), and in the DPT test (p UNIFESP.

  10. Central nervous system leukemia and lymphoma: computed tomographic manifestations

    International Nuclear Information System (INIS)

    Pagani, J.J.; Libshitz, H.I.; Wallace, S.; Hayman, L.A.

    1981-01-01

    Computed tomographic (CT) abnormalities in the brain were identified in 31 of 405 patients with leukemia or lymphoma. Abnormalities included neoplastic masses (15), hemorrhage (9), abscess (2), other brain tumors (4), and methotrexate leukoencephalopathy (1). CT was normal in 374 patients, including 148 with meningeal disease diagnosed by cerebrospinal fluid cytologic examination. Prior to treatment, malignant masses were isodense or of greater density with varying amounts of edema. Increase in size or number of the masses indicated worsening. Response to radiation and chemotherapy was manifested by development of a central low density region with an enhancing rim. CT findings correlated with clinical and cerebrospinal fluid findings. The differential diagnosis of the various abnormalities is considered

  11. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    Science.gov (United States)

    Tan, Jianbin

    2018-02-01

    The engineering design of large-scale grid-connected photovoltaic (PV) power stations, and the many simulation and analysis systems developed for them, require computer-drawn operating characteristic curves of PV array elements and a suitable segmented non-linear interpolation algorithm. Taking component performance parameters as the main design basis, the calculation method obtains five characteristic performance points of a PV module. Combined with the series and parallel connection of the PV array, the computer drawing of the performance curve of the PV array unit can then be realized. At the same time, the specific module data can be fed into PV development software, improving the operation of the PV array unit in practical applications.
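
    As a hedged illustration of the kind of curve being drawn (the paper's specific segmented non-linear interpolation method is not reproduced here), the sketch below uses an ideal single-diode model for one module and scales it to an array by series/parallel connection; all parameter values are assumptions:

```python
import numpy as np

def module_iv(v, i_ph=8.5, i_0=1e-7, n=1.3, n_cells=60, t=298.15):
    """Ideal single-diode model (series/shunt resistance neglected):
    I = I_ph - I_0 * (exp(V / (n * Ns * Vt)) - 1), clipped at zero.
    All parameter values are illustrative, not taken from the paper."""
    k, q = 1.380649e-23, 1.602176634e-19
    v_t = n * n_cells * k * t / q
    i = i_ph - i_0 * (np.exp(v / v_t) - 1.0)
    return np.clip(i, 0.0, None)

def array_iv(n_series=20, n_parallel=10, v_max=40.0, points=400):
    """Combine identical modules: voltages add along series strings,
    string currents add in parallel (uniform irradiance assumed)."""
    v_mod = np.linspace(0.0, v_max, points)
    i_mod = module_iv(v_mod)
    return v_mod * n_series, i_mod * n_parallel

v_arr, i_arr = array_iv()
p_max = (v_arr * i_arr).max()
print(f"array maximum power is roughly {p_max / 1000:.1f} kW")
```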

  12. Computer simulation of nonequilibrium processes

    International Nuclear Information System (INIS)

    Wallace, D.C.

    1985-07-01

    The underlying concepts of nonequilibrium statistical mechanics, and of irreversible thermodynamics, will be described. The question at hand is then how these concepts are to be realized in computer simulations of many-particle systems. The answer will be given for dissipative deformation processes in solids, on three hierarchical levels: heterogeneous plastic flow, dislocation dynamics, and molecular dynamics. Application to the shock process will be discussed

  13. An Overview of Computer-Based Natural Language Processing.

    Science.gov (United States)

    Gevarter, William B.

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…

  14. Real time 3D structural and Doppler OCT imaging on graphics processing units

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of the flows in capillary vessels, is presented. Generally, the time needed to process the FdOCT data on the main processor of the computer (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra with a frame rate of about 120 fps. The 3D imaging in the same mode of volume data built of 220 × 100 A-scans is performed at a rate of about 8 frames per second. In this paper the software architecture, organization of the threads and applied optimizations are described. For illustration, screen shots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and the human eye in vivo are presented.
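
    The per-A-scan independence that the GPU exploits can be illustrated with a minimal NumPy sketch of the FdOCT pipeline described above (spectra to structural A-scans by FFT, Doppler from phase differences between adjacent A-scans); the spectral preprocessing, windowing and array sizes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def process_bscan(spectra, window=None):
    """spectra: (n_ascans, n_pixels) raw FdOCT spectra (assumed already
    resampled to be linear in wavenumber). Returns a structural magnitude
    image and a Doppler phase-difference image. Every A-scan is processed
    independently, so a GPU handles thousands of them concurrently."""
    n_ascans, n_pix = spectra.shape
    if window is None:
        window = np.hanning(n_pix)
    s = (spectra - spectra.mean(axis=0)) * window        # remove fixed pattern, apodize
    ascans = np.fft.fft(s, axis=1)[:, : n_pix // 2]      # complex depth profiles
    structural = 20 * np.log10(np.abs(ascans) + 1e-12)
    doppler = np.angle(ascans[1:] * np.conj(ascans[:-1]))  # phase shift between
    return structural, doppler                              # adjacent A-scans

# usage with synthetic data: 2000 A-scans of 2048-pixel spectra
rng = np.random.default_rng(0)
struct, dopp = process_bscan(rng.normal(size=(2000, 2048)))
print(struct.shape, dopp.shape)   # (2000, 1024) (1999, 1024)
```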

  15. The Association of State Legal Mandates for Data Submission of Central Line-associated Blood Stream Infections in Neonatal Intensive Care Units with Process and Outcome Measures

    Science.gov (United States)

    Zachariah, Philip; Reagan, Julie; Furuya, E. Yoko; Dick, Andrew; Liu, Hangsheng; Herzig, Carolyn T.A; Pogorzelska-Maziarz, Monika; Stone, Patricia W.; Saiman, Lisa

    2014-01-01

    Objective To determine the association between state legal mandates for data submission of central line-associated blood stream infections (CLABSIs) in neonatal intensive care units (NICUs) and process/outcome measures. Design Cross-sectional study. Participants National sample of level II/III and III NICUs participating in National Healthcare Safety Network (NHSN) surveillance. Methods State mandates for data submission of CLABSIs in NICUs in place by 2011 were compiled and verified with state healthcare-associated infection coordinators. A web-based survey of infection control departments in October 2011 assessed CLABSI prevention practices, i.e., compliance with checklist and bundle components (process measures), in ICUs including NICUs. Corresponding 2011 NHSN NICU CLABSI rates (outcome measures) were used to calculate Standardized Infection Ratios (SIR). The association between mandates and process/outcome measures was assessed by multivariable logistic regression. Results Among 190 study NICUs, 107 (56.3%) NICUs were located in states with mandates, with mandates in place for 3 or more years for half. More NICUs in states with mandates reported ≥95% compliance with at least one CLABSI prevention practice (52.3% – 66.4%) than NICUs in states without mandates (28.9% – 48.2%). Mandates were predictors of ≥95% compliance with all practices (OR 2.8; 95% CI 1.4–6.1). NICUs in states with mandates thus reported greater compliance with prevention practices, but mandates were not associated with lower CLABSI rates. PMID:25111921
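
    For reference, the standardized infection ratio used as the outcome measure is the ratio of observed to predicted infections, with the predicted count built from baseline rates and central-line days. A minimal sketch with illustrative numbers (not data from the study):

```python
def standardized_infection_ratio(observed, line_days_by_stratum, baseline_rates):
    """SIR = observed infections / predicted infections, where the predicted
    count sums baseline rate (per 1000 central-line days) times line days over
    strata (e.g., birth-weight categories). Numbers below are illustrative."""
    predicted = sum(rate * days / 1000.0
                    for rate, days in zip(baseline_rates, line_days_by_stratum))
    return observed / predicted if predicted > 0 else float("nan")

# e.g., 4 observed CLABSIs across three birth-weight strata
sir = standardized_infection_ratio(4, [1200, 800, 500], [2.6, 1.8, 1.2])
print(f"SIR = {sir:.2f}")
```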

  16. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    Science.gov (United States)

    2010-01-01

    Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological

  17. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    Science.gov (United States)

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an

  18. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    Directory of Open Access Journals (Sweden)

    D'Onofrio David J

    2010-01-01

    Full Text Available Abstract Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating

  19. Software for a magnetic disk drive unit connected with a TPA-1001-i computer

    International Nuclear Information System (INIS)

    Elizarov, O.I.; Mateeva, A.; Salamatin, I.M.

    1977-01-01

    The disk drive unit, with a capacity of 1250 K and a minimal addressable unit of memory of 1 sector (128 12-bit words), is connected to a TPA-1001-i computer. The operation regimes of the controller, the functions and formats of the commands used, and the software are described. The data transfer between the computer and the magnetic disk drive unit is realized by means of programs relocatable in binary form. These are inserted in a standard program library with a modular structure. The manner of control handling and data transfer between programs stored in the library on the magnetic disk drive is described. The resident program (100{sub 8} words) inserted in the monitor takes into account the special features of the disk drive unit being used. The algorithms of the correction programs for the disk drive unit, the program for rewriting the library from paper tape to the disk drive unit, and the program for writing and reading the monitor are described

  20. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    [ABSTRACT] Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In terms of high-performance computation capabilities, graphics processing units (GPUs) deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general-purpose GPU a...

  1. Single neuron computation

    CERN Document Server

    McKenna, Thomas M; Zornetzer, Steven F

    1992-01-01

    This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding a single neuron structure. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real n

  2. STRATEGIC BUSINESS UNIT – THE CENTRAL ELEMENT OF THE BUSINESS PORTFOLIO STRATEGIC PLANNING PROCESS

    OpenAIRE

    FLORIN TUDOR IONESCU

    2011-01-01

    Over time, due to changes in the marketing environment, generated by the tightening competition, technological, social and political pressures the companies have adopted a new approach, by which the potential businesses began to be treated as strategic business units. A strategic business unit can be considered a part of a company, a product line within a division, and sometimes a single product or brand. From a strategic perspective, the diversified companies represent a collection of busine...

  3. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    OpenAIRE

    D'Onofrio, David J; An, Gary

    2010-01-01

    Abstract Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these pr...

  4. Interventions on central computing services during the weekend of 21 and 22 August

    CERN Multimedia

    2004-01-01

    As part of the planned upgrade of the computer centre infrastructure to meet the LHC computing needs, approximately 150 servers, hosting in particular the NICE home directories, Mail services and Web services, will need to be physically relocated to another part of the computing hall during the weekend of 21 and 22 August. On Saturday 21 August, starting from 8:30 a.m., interruptions of typically 60 minutes will take place on the following central computing services: NICE and the whole Windows infrastructure, Mail services, file services (including home directories and DFS workspaces), Web services, VPN access, and Windows Terminal Services. During any interruption, incoming mail from outside CERN will be queued and delivered as soon as the service is operational again. All services should be available again on Saturday 21 at 17:30, but a few additional interruptions will be possible after that time and on Sunday 22 August. IT Department

  5. The influence of (central) auditory processing disorder on the severity of speech-sound disorders in children.

    Science.gov (United States)

    Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede

    2016-02-01

    To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 to 10 years and 11 months, who were divided into two different groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder.

  6. Field programmable gate array-assigned complex-valued computation and its limits

    Energy Technology Data Exchange (ETDEWEB)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria); Zwick, Wolfgang; Klier, Jochen [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Wenzel, Lothar [National Instruments, 11500 N MOPac Expy, Austin, Texas 78759 (United States); Gröschl, Martin [Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria)

    2014-09-15

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure that simulation accuracy is maintained.
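
    As a hedged illustration of the trade-off being studied (the word length, scaling and error metric below are assumptions, not the paper's configuration), fixed-point complex multiplication and its relative error against 64-bit floating point can be sketched as:

```python
import numpy as np

FRAC_BITS = 16          # assumed fractional word length (illustrative)
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a float to a signed fixed-point integer (illustrative scaling)."""
    return int(round(x * SCALE))

def fixed_cmul(ar, ai, br, bi):
    """(ar + j*ai) * (br + j*bi) in fixed point: integer multiplies followed
    by a right shift, roughly what an FPGA DSP slice would perform."""
    rr = (ar * br - ai * bi) >> FRAC_BITS
    ri = (ar * bi + ai * br) >> FRAC_BITS
    return rr, ri

# compare against float64 for a random sample of operand pairs
rng = np.random.default_rng(0)
errs = []
for _ in range(10_000):
    a, b = rng.uniform(-1, 1, 2) + 1j * rng.uniform(-1, 1, 2)
    rr, ri = fixed_cmul(to_fixed(a.real), to_fixed(a.imag),
                        to_fixed(b.real), to_fixed(b.imag))
    approx = (rr + 1j * ri) / SCALE
    exact = a * b
    errs.append(abs(approx - exact) / max(abs(exact), 1e-12))
print(f"median relative error: {np.median(errs):.1e}")
```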

  7. Altered central pain processing after pancreatic surgery for chronic pancreatitis

    NARCIS (Netherlands)

    Bouwense, S. A.; Ahmed Ali, U.; ten Broek, R. P.; Issa, Y.; van Eijck, C. H.; Wilder-Smith, O. H.; van Goor, H.

    2013-01-01

    Chronic abdominal pain is common in chronic pancreatitis (CP) and may involve altered central pain processing. This study evaluated the relationship between pain processing and pain outcome after pancreatic duct decompression and/or pancreatic resection in patients with CP. Patients with CP

  8. Sono-leather technology with ultrasound: a boon for unit operations in leather processing - review of our research work at Central Leather Research Institute (CLRI), India.

    Science.gov (United States)

    Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari

    2009-01-01

    Ultrasound is a sound wave with a frequency above the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation. The benefits include improvement in process efficiency, process time reduction, performing the processes under milder conditions and avoiding the use of some toxic chemicals to achieve cleaner processing. Ultrasound can therefore serve as an advanced technique for augmenting these processes. The important point here is that ultrasonic irradiation is a physical method of activation rather than one relying on chemical entities. Detailed studies have been made of the unit operations related to leather, such as diffusion rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, the oil-water emulsification process and solid-liquid tannin extraction from vegetable tanning materials, as well as of precipitation reactions in wastewater treatment. The fundamental mechanism involved in these processes is ultrasonic cavitation in liquid media. In addition, there exist some process-specific mechanisms for the enhancement of the processes. For instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could be a reason for the diffusion rate enhancement in leather processing, as reported for the first time. Exhaustive scientific research work has been carried out in this area by our group working in the Chemical Engineering Division of CLRI, and most of these benefits have been proven, with publications in valued peer-reviewed international journals. The overall results indicate an approximately 2-5-fold increase in process efficiency due to ultrasound under the given process conditions for various unit operations, with additional benefits. Scale-up studies are underway for converting these concepts into a viable larger-scale operation. In

  9. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.
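
    The underlying computation being accelerated, evaluating a potential at many independent surface points, can be sketched as a plain screened-Coulomb sum (this is an illustrative stand-in, not the ALPB/HCP method of the paper; all parameters are assumptions):

```python
import numpy as np

def surface_potential(surf_pts, atom_pos, atom_q, kappa=0.1, eps=80.0):
    """Potential at each surface point as a screened Coulomb sum over atoms
    (arbitrary units). Each surface point is independent, so a GPU assigns
    one point per thread; multi-scale (HCP-style) approximations additionally
    replace distant atom groups with a few effective charges to cut the cost."""
    phi = np.empty(len(surf_pts))
    for k, p in enumerate(surf_pts):                   # the loop a GPU parallelizes
        r = np.linalg.norm(atom_pos - p, axis=1) + 1e-9
        phi[k] = np.sum(atom_q * np.exp(-kappa * r) / (eps * r))
    return phi

rng = np.random.default_rng(0)
phi = surface_potential(rng.normal(size=(500, 3)) * 20,     # surface points
                        rng.normal(size=(5000, 3)) * 15,    # atom positions
                        rng.choice([-1.0, 1.0], size=5000)) # partial charges
print(phi.shape, float(phi.mean()))
```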

  10. Process computer system for the prototype ATR 'Fugen'

    International Nuclear Information System (INIS)

    Oteru, Shigeru

    1979-01-01

    In recent nuclear power plants, computers are regarded as one of the component equipment items, and data processing, plant monitoring and performance calculation tend to be carried out with one on-line computer. As plants become large and complex and their operational conditions become strict, systems having a performance-calculation function whose results are immediately reflected in operation have been introduced. In the process computer for the prototype ATR ''Fugen'', a prediction function was provided, in addition to the functions of data processing, plant monitoring and detailed performance calculation, to simulate in advance the core state resulting from operations that change core reactivity, such as control rod movements and the control of liquid poison during operation. The core periodic monitoring program, core operational aid program, core any-time data collecting program and core periodic data collecting program, and their application programs, are explained. Core performance calculation comprises the calculation of the thermal output distribution in the core and the various accompanying characteristics, and the monitoring of thermal limiting values. The computer used is a Hitachi control computer HIDIC-500, and typewriters, a process colored display, an operating console and other peripheral equipment are connected to it. (Kako, I.)

  11. U.S. Central Station Nuclear Power Plants: operating history

    International Nuclear Information System (INIS)

    1976-01-01

    The information assembled in this booklet highlights the operating history of U. S. Central Station nuclear power plants through December 31, 1976. The information presented is based on data furnished by the operating electric utilities. The information is presented in the form of statistical tables and computer printouts of major shutdown periods for each nuclear unit. The capacity factor data for each unit is presented both on the basis of its net design electrical rating and its net maximum dependable capacity, as reported by the operating utility to the Nuclear Regulatory Commission

  12. Study guide to accompany Computers and Data Processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Study Guide to Accompany Computers and Data Processing provides information pertinent to the fundamental aspects of computers and computer technology. This book presents the key benefits of using computers. Organized into five parts encompassing 19 chapters, this book begins with an overview of the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. This text then introduces computer hardware and describes the processor. Other chapters describe how microprocessors are made and describe the physical operation of computers. This book discusses as w

  13. Plant process computer replacements - techniques to limit installation schedules and costs

    International Nuclear Information System (INIS)

    Baker, M.D.; Olson, J.L.

    1992-01-01

    Plant process computer systems, a standard fixture in all nuclear power plants, are used to monitor and display important plant process parameters. Scanning thousands of field sensors and alarming out-of-limit values, these computer systems are heavily relied on by control room operators. The original nuclear steam supply system (NSSS) vendor for the power plant often supplied the plant process computer. Designed using sixties and seventies technology, a plant's original process computer has been obsolete for some time. Driven by increased maintenance costs and new US Nuclear Regulatory Commission regulations such as NUREG-0737, Suppl. 1, many utilities have replaced their process computers with more modern computer systems. Given that computer systems are by their nature prone to rapid obsolescence, this replacement cycle will likely repeat. A process computer replacement project can be a significant capital expenditure and must be performed during a scheduled refueling outage. The object of the installation process is to install a working system on schedule. Experience gained by supervising several computer replacement installations has taught lessons that, if applied, will shorten the schedule and limit the risk of costly delays. Examples illustrating this technique are given. This paper and these examples deal only with the installation process and assume that the replacement computer system has been adequately designed, and development and factory tested

  14. Centralized processing of contact-handled TRU waste feasibility analysis

    International Nuclear Information System (INIS)

    1986-12-01

    This report presents the feasibility study of centralized processing of contact-handled TRU waste. Scenarios, transportation options, a summary of cost estimates, and institutional issues are among the subjects discussed

  15. 15 CFR 971.209 - Processing outside the United States.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Processing outside the United States... THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS Applications Contents § 971.209 Processing outside the United States. (a) Except as provided in this section...

  16. The distribution of mercury in a forest floor transect across the central United States

    Science.gov (United States)

    Charles H. (Hobie) Perry; Michael C. Amacher; William Cannon; Randall K. Kolka; Laurel Woodruff

    2009-01-01

    Mercury (Hg) stored in soil organic matter may be released when the forest floor is consumed by fire. Our objective is to document the spatial distribution of forest floor Hg for a transect crossing the central United States. Samples collected by the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis Soil Quality Indicator were tested...

  17. Closure Report Central Nevada Test Area Subsurface Corrective Action Unit 443 January 2016

    Energy Technology Data Exchange (ETDEWEB)

    Findlay, Rick [US Department of Energy, Washington, DC (United States). Office of Legacy Management

    2015-11-01

    The U.S. Department of Energy (DOE) Office of Legacy Management (LM) prepared this Closure Report for the subsurface Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA), Nevada, Site. CNTA was the site of a 0.2- to 1-megaton underground nuclear test in 1968. Responsibility for the site’s environmental restoration was transferred from the DOE, National Nuclear Security Administration, Nevada Field Office to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 1996, as amended 2011) and all applicable Nevada Division of Environmental Protection (NDEP) policies and regulations. This Closure Report provides justification for closure of CAU 443 and provides a summary of completed closure activities; describes the selected corrective action alternative; provides an implementation plan for long-term monitoring with well network maintenance and approaches/policies for institutional controls (ICs); and presents the contaminant, compliance, and use-restriction boundaries for the site.

  18. 40 CFR 63.765 - Glycol dehydration unit process vent standards.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Glycol dehydration unit process vent... Facilities § 63.765 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...

  19. 40 CFR 63.1275 - Glycol dehydration unit process vent standards.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 11 2010-07-01 2010-07-01 true Glycol dehydration unit process vent... Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...

  20. Future trends in power plant process computer techniques

    International Nuclear Information System (INIS)

    Dettloff, K.

    1975-01-01

    The development of new concepts in process computer technique has advanced in great steps, in three areas: hardware, software, and the application concept. In hardware, new computers with new peripherals, such as colour layer equipment, have been developed. In software, a decisive step has been made in the area of 'automation software'. Through these components, a step forward has also been made on the question of incorporating the process computer into the structure of the overall power plant control system. (orig./LH)

  1. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To alleviate the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
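
    For reference, the multi-scale Laplacian-of-Gaussian edge detection that the example architecture implements in the FPGA can be sketched in software as follows (the scales and threshold are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def multiscale_log_edges(image, sigmas=(1.0, 2.0, 4.0), thresh=0.01):
    """Laplacian-of-Gaussian response at several scales; edge pixels are taken
    where the response changes sign with sufficient local contrast."""
    edges = []
    for s in sigmas:
        log = ndimage.gaussian_laplace(image.astype(float), sigma=s)
        # zero crossings: sign change between a pixel and its right/lower neighbour
        zc = ((np.sign(log[:-1, :-1]) != np.sign(log[1:, :-1])) |
              (np.sign(log[:-1, :-1]) != np.sign(log[:-1, 1:])))
        strong = np.abs(log[:-1, :-1]) > thresh * np.abs(log).max()
        edges.append(zc & strong)
    return edges

img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                    # synthetic square target
for s, e in zip((1.0, 2.0, 4.0), multiscale_log_edges(img)):
    print(f"sigma={s}: {int(e.sum())} edge pixels")
```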

  2. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  3. 24 CFR 290.21 - Computing annual number of units eligible for substitution of tenant-based assistance or...

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Computing annual number of units eligible for substitution of tenant-based assistance or alternative uses. 290.21 Section 290.21 Housing and... Multifamily Projects § 290.21 Computing annual number of units eligible for substitution of tenant-based...

  4. Current Trends in Cloud Computing A Survey of Cloud Computing Systems

    OpenAIRE

    Harjit Singh

    2012-01-01

    Cloud computing that has become an increasingly important trend, is a virtualization technology that uses the internet and central remote servers to offer the sharing of resources that include infrastructures, software, applications and business processes to the market environment to fulfill the elastic demand. In today’s competitive environment, the service vitality, elasticity, choices and flexibility offered by this scalable technology are too attractive that makes the cloud computing to i...

  5. LHCb: Control and Monitoring of the Online Computer Farm for Offline processing in LHCb

    CERN Multimedia

    Granado Cardoso, L A; Closier, J; Frank, M; Gaspar, C; Jost, B; Liu, G; Neufeld, N; Callot, O

    2013-01-01

    LHCb, one of the 4 experiments at the LHC accelerator at CERN, uses approximately 1500 PCs (averaging 12 cores each) for processing the High Level Trigger (HLT) during physics data taking. During periods when data acquisition is not required most of these PCs are idle. In these periods it is possible to profit from the unused processing capacity to run offline jobs, such as Monte Carlo simulation. The LHCb offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control). In LHCbDIRAC, job agents are started on Worker Nodes, pull waiting tasks from the central WMS (Workload Management System) and process them on the available resources. A Control System was developed which is able to launch, control and monitor the job agents for the offline data processing on the HLT Farm. This control system is based on the existing Online System Control infrastructure, the PVSS SCADA and the FSM toolkit. It has been extensively used launching and monitoring 22.000+ agents simultaneo...
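
    The offline use of the HLT farm described above follows a pull model: agents started on idle worker nodes fetch waiting tasks from the central workload management system and process them locally. The sketch below illustrates that pattern only; the queue, task payload and node names are hypothetical stand-ins, not the LHCbDIRAC interfaces.

        import queue
        import threading
        import time

        wms = queue.Queue()                      # stand-in for the central WMS task queue
        for job_id in range(8):
            wms.put(job_id)                      # e.g. Monte Carlo production tasks

        def job_agent(node_name):
            """Runs on an idle worker node: pull a task, process it, repeat until empty."""
            while True:
                try:
                    job = wms.get(timeout=1.0)   # pull a waiting task
                except queue.Empty:
                    return                       # nothing left to do; the agent exits
                time.sleep(0.1)                  # placeholder for the actual workload
                print(f"{node_name} finished job {job}")
                wms.task_done()

        agents = [threading.Thread(target=job_agent, args=(f"hlt{n:02d}",)) for n in range(4)]
        for a in agents:
            a.start()
        for a in agents:
            a.join()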

  6. Is banking supervision central to central banking?

    OpenAIRE

    Joe Peek; Eric S. Rosengren; Geoffrey M. B. Tootell

    1997-01-01

    Whether central banks should play an active role in bank supervision and regulation is being debated both in the United States and abroad. While the Bank of England has recently been stripped of its supervisory responsibilities and several proposals in the United States have advocated removing bank supervision from the Federal Reserve System, other countries are considering enhancing central bank involvement in this area. Many of the arguments for and against these proposals hinge on the effe...

  7. Evaluation of Selected Resource Allocation and Scheduling Methods in Heterogeneous Many-Core Processors and Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ciznicki Milosz

    2014-12-01

    Full Text Available Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance opportunities to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently assigned in time to the different types of processing units, taking into account their specific resource requirements. Additionally, one should note that available heterogeneous resources have been designed as general-purpose units, albeit with many built-in features accelerating specific application operations. In other words, the same algorithm or application functionality can be implemented as a different task for CPU or GPU. Nevertheless, from the perspective of various evaluation criteria, e.g. the total execution time or energy consumption, we may observe completely different results. Therefore, as tasks can be scheduled and managed in many alternative ways on both many-core CPUs and GPUs, and consequently have a huge impact on overall computing performance, there is a need for new and improved resource management techniques. In this paper we discuss results achieved during experimental performance studies of selected task scheduling methods in heterogeneous computing systems. Additionally, we present a new architecture for a resource allocation and task scheduling library which provides a generic application programming interface at the operating system level for improving scheduling policies, taking into account the diversity of tasks and the characteristics of heterogeneous computing resources.
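
    The scheduling problem outlined above can be made concrete with a greedy earliest-finish-time heuristic: each task has a different estimated runtime on a CPU core and on a GPU, and is placed on whichever unit lets it finish soonest. This is only a sketch with assumed cost figures; it is not the resource allocation library or API described in the paper.

        # Estimated runtimes per resource type are assumed numbers for illustration.
        tasks = [{"name": "fft",   "cpu": 4.0, "gpu": 1.0},
                 {"name": "parse", "cpu": 1.0, "gpu": 3.0},
                 {"name": "blas",  "cpu": 6.0, "gpu": 1.5}]

        ready_at = {"cpu0": 0.0, "cpu1": 0.0, "gpu0": 0.0}   # when each unit becomes free

        def kind(unit):
            return "gpu" if unit.startswith("gpu") else "cpu"

        schedule = []
        for t in sorted(tasks, key=lambda t: -min(t["cpu"], t["gpu"])):   # longest task first
            # Choose the unit that yields the earliest finish time for this task.
            unit = min(ready_at, key=lambda u: ready_at[u] + t[kind(u)])
            start = ready_at[unit]
            ready_at[unit] = start + t[kind(unit)]
            schedule.append((t["name"], unit, start, ready_at[unit]))

        for name, unit, start, end in schedule:
            print(f"{name:5s} -> {unit} [{start:.1f}, {end:.1f}]")

    Evaluating the same assignment under a different criterion, e.g. energy instead of time, generally changes the best placement, which is the point the paper makes about criterion-dependent scheduling.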

  8. Management of planned unit outages

    International Nuclear Information System (INIS)

    Brune, W.

    1984-01-01

    Management of planned unit outages at the Bruno Leuschner Nuclear Power Plant is based on the experience gained with Soviet PWR units of the WWER type over a period of more than 50 reactor-years. For PWR units, planned outages concentrate almost exclusively on annual refuellings and major maintenance of the power plant facilities involved. Planning of such major maintenance work is based on a standardized basic network plan and a catalogue of standardized maintenance and inspection measures. From these, an overall maintenance schedule of the unit and partial process plans of the individual main components are derived (manually or by computer) and, in the temporal integration of major maintenance at every unit, fixed starting times and durations are determined. More than 75% of the maintenance work at the Bruno Leuschner Nuclear Power Plant is carried out by the plant's own maintenance personnel. Large-scale maintenance of every unit is controlled by a special project head. He is assisted by commissioners, each of whom is responsible for his own respective item. A daily control report is made. The organizational centre is a central office which works in shifts around the clock. All maintenance orders and reports of completion pass through this office; thus, the overall maintenance schedule can be corrected daily. To enforce the proposed operational strategy, suitable accompanying technical measures are required with respect to effective facility monitoring and technical diagnosis, purposeful improvement of particularly sensitive components and an increase in the effectiveness of maintenance work by special technologies and devices. (author)

  9. Innovation of the computer system for the WWER-440 simulator

    International Nuclear Information System (INIS)

    Schrumpf, L.

    1988-01-01

    The configuration of the WWER-440 simulator computer system consists of four SMEP computers. The basic data processing unit consists of two interlinked SM 52/11.M1 computers with 1 MB of main memory. This part of the computer system of the simulator controls the operation of the entire simulator, processes the programs of technology behavior simulation, of the unit information system and of other special systems, guarantees program support and the operation of the instructor's console. An SM 52/11 computer with 256 kB of main memory is connected to each unit. It is used as a communication unit for data transmission using the DASIO 600 interface. Semigraphic color displays are based on the microprocessor modules of the SM 50/40 and SM 53/10 kit supplemented with a modified TESLA COLOR 110 ST tv receiver. (J.B.). 1 fig

  10. Mapping tectonic and anthropogenic processes in central California using satellite and airborne InSAR

    Science.gov (United States)

    Liu, Z.; Lundgren, P.; Liang, C.; Farr, T. G.; Fielding, E. J.

    2017-12-01

    The improved spatiotemporal resolution of surface deformation from recent satellite and airborne InSAR measurements provides a great opportunity to improve our understanding of both tectonic and non-tectonic processes. In central California the primary plate boundary fault system (San Andreas fault) lies adjacent to the San Joaquin Valley (SJV), a vast structural trough that accounts for about one-sixth of the United States' irrigated land and one-fifth of its extracted groundwater. The central San Andreas fault (CSAF) displays a range of fault slip behavior with creeping in its central segment that decreases towards its northwest and southeast ends, where it transitions to being fully locked. Despite much progress, many questions regarding fault and anthropogenic processes in the region still remain. In this study, we combine satellite InSAR and NASA airborne UAVSAR data to image fault and anthropogenic deformation. The UAVSAR data cover fault-perpendicular swaths imaged from opposing look directions and fault-parallel swaths since 2009. The much finer spatial resolution and optimized viewing geometry provide important constraints on near-fault deformation and fault slip at very shallow depth. We performed a synoptic InSAR time series analysis using Sentinel-1, ALOS, and UAVSAR interferograms. We estimate azimuth mis-registration between single look complex (SLC) images of Sentinel-1 in a stack sense to achieve accurate azimuth co-registration between SLC images for low coherence and/or long interval interferometric pairs. We show that it is important to correct large-scale ionosphere features in ALOS-2 ScanSAR data for accurate deformation measurements. Joint analysis of UAVSAR and ALOS interferometry measurements shows clear variability in deformation along the fault strike, suggesting variable fault creep and locking at depth and along strike. In addition to fault creep, the L-band ALOS, and especially ALOS-2 ScanSAR interferometry, show large-scale ground

  11. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design

    International Nuclear Information System (INIS)

    Menges, Achim

    2012-01-01

    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies. (paper)

  12. Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography

    International Nuclear Information System (INIS)

    Gondo, Gakuji; Ishiwata, Yusuke; Yamashita, Toshinori; Iida, Takashi; Moro, Yutaka

    1989-01-01

    Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography (CR) are discussed. Computed radiography is a digital radiography system in which an imaging plate is used as an X-ray detector and a final image is displayed on the film. In the angiograms performed with CR, the spatial frequency components can be enhanced for the easy analysis of fine blood vessels. Computed radiography has an automatic sensitivity and a latitude-setting mechanism, thus serving as an 'automatic camera.' This mechanism is useful for radiography with a mobile X-ray unit in hospital wards, intensive care units, or operating rooms where the appropriate setting of exposure conditions is difficult. We applied this mechanism to direct percutaneous carotid angiography and intravenous digital subtraction angiography with a mobile X-ray unit. Direct percutaneous carotid angiography using CR and a mobile X-ray unit were taken after the manual injection of a small amount of a contrast material through a fine needle. We performed direct percutaneous carotid angiography with this method 68 times on 25 cases from August 1986 to December 1987. Of the 68 angiograms, 61 were evaluated as good, compared with conventional angiography. Though the remaining seven were evaluated as poor, they were still diagnostically effective. This method is found useful for carotid angiography in emergency rooms, intensive care units, or operating rooms. Cerebral venography using CR and a mobile X-ray unit was done after the manual injection of a contrast material through the bilateral cubital veins. The cerebral venous system could be visualized from 16 to 24 seconds after the beginning of the injection of the contrast material. We performed cerebral venography with this method 14 times on six cases. These venograms were better than conventional angiograms in all cases. This method may be useful in managing patients suffering from cerebral venous thrombosis. (J.P.N.)

  13. On the hazard rate process for imperfectly monitored multi-unit systems

    International Nuclear Information System (INIS)

    Barros, A.; Berenguer, C.; Grall, A.

    2005-01-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current state of the units is imperfectly monitored. The definition of the hazard rate process existing with perfect monitoring is extended to the realistic case where the units' failure times are not always detected (non-detection events). The observed hazard rate process defined in this way gives a better representation of the system behavior than the classical failure rate calculated without any information on the units' state, and than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is mainly discussed and illustrated on a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies.

  14. On the hazard rate process for imperfectly monitored multi-unit systems

    Energy Technology Data Exchange (ETDEWEB)

    Barros, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, C. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France); Grall, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)

    2005-12-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current state of the units is imperfectly monitored. The definition of the hazard rate process existing with perfect monitoring is extended to the realistic case where the units' failure times are not always detected (non-detection events). The observed hazard rate process defined in this way gives a better representation of the system behavior than the classical failure rate calculated without any information on the units' state, and than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is mainly discussed and illustrated on a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies.
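
    The observed hazard rate idea above can be illustrated with a small Monte Carlo sketch of the two-parallel-unit example: each unit fails at an exponential rate, each failure is reported only with some detection probability, and the conditional failure probability of the system is estimated both with and without the (imperfect) monitoring information. The rates, detection probability and time window below are assumptions for illustration, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        lam, p_detect, n_sim = 0.1, 0.8, 200_000       # unit failure rate, detection probability

        t_fail = rng.exponential(1.0 / lam, size=(n_sim, 2))    # two units in parallel
        detected = rng.random((n_sim, 2)) < p_detect            # non-detection events possible
        system_fail = t_fail.max(axis=1)                        # parallel system: last failure

        # The monitor only "knows" a unit is down if its failure was detected.
        observed_down = np.where(detected, t_fail, np.inf)

        t = 15.0
        alive = system_fail > t
        seen_down = (observed_down.min(axis=1) <= t) & alive    # one unit observed failed by t

        p_plain = ((system_fail > t) & (system_fail <= t + 1)).sum() / alive.sum()
        p_obs = ((system_fail <= t + 1) & seen_down).sum() / max(seen_down.sum(), 1)
        print("P(fail in [t,t+1) | alive)                =", round(p_plain, 4))
        print("P(fail in [t,t+1) | alive, one seen down) =", round(p_obs, 4))

    The second, monitoring-conditioned estimate is the kind of quantity the observed hazard rate process formalizes; with perfect detection it reduces to the hazard rate process based on perfect monitoring.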

  15. Implementing a multifaceted intervention to decrease central line-associated bloodstream infections in SEHA (Abu Dhabi Health Services Company) intensive care units: the Abu Dhabi experience.

    Science.gov (United States)

    Latif, Asad; Kelly, Bernadette; Edrees, Hanan; Kent, Paula S; Weaver, Sallie J; Jovanovic, Branislava; Attallah, Hadeel; de Grouchy, Kristin K; Al-Obaidli, Ali; Goeschel, Christine A; Berenholtz, Sean M

    2015-07-01

    OBJECTIVE To determine whether implementation of a multifaceted intervention would significantly reduce the incidence of central line-associated bloodstream infections. DESIGN Prospective cohort collaborative. SETTING AND PARTICIPANTS Intensive care units of the Abu Dhabi Health Services Company hospitals in the Emirate of Abu Dhabi. INTERVENTIONS A bundled intervention consisting of 3 components was implemented as part of the program. It consisted of a multifaceted approach that targeted clinician use of evidence-based infection prevention recommendations, tools that supported the identification of local barriers to these practices, and implementation ideas to help ensure patients received the practices. Comprehensive unit-based safety teams were created to improve safety culture and teamwork. Finally, the measurement and feedback of monthly infection rate data to safety teams, senior leaders, and staff in participating intensive care units was encouraged. The main outcome measure was the quarterly rate of central line-associated bloodstream infections. RESULTS Eighteen intensive care units from 7 hospitals in Abu Dhabi implemented the program and achieved an overall 38% reduction in their central line-associated bloodstream infection rate, adjusted at the hospital and unit level. The number of units with a quarterly central line-associated bloodstream infection rate of less than 1 infection per 1,000 catheter-days increased by almost 40% between the baseline and postintervention periods. CONCLUSION A significant reduction in the global morbidity and mortality associated with central line-associated bloodstream infections is possible across intensive care units in disparate settings using a multifaceted intervention.

  16. HMI Data Processing and Electronics Departmenmt. Scientific report 1984

    International Nuclear Information System (INIS)

    1985-01-01

    The Data Processing and Electronics Department carries out application-centered R+D work in the fields of general and process-related data processing, digital and analog measuring systems, and electronic elements. As part of the HMI infrastructure, the Department carries out central data processing and electronics functions. The R+D activities of the Department and its infrastructural tasks were carried out in seven Working Groups and one Project Group: Computer systems; Mathematics and graphical data processing; Software developments; Process computer systems, hardware; Nuclear electronics, measuring and control systems; Research on structural elements and irradiation testing; Computer center and cooperation in the 'Central Project Leader Group of the German Research Network' (DFN). (orig./RB) [de

  17. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...

  18. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...

  19. The AMchip04 and the processing unit prototype for the FastTracker

    International Nuclear Information System (INIS)

    Andreani, A; Alberti, F; Stabile, A; Annovi, A; Beretta, M; Volpi, G; Bogdan, M; Shochet, M; Tang, J; Tompkins, L; Citterio, M; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment's complexity, the accelerator backgrounds and luminosity increase we need increasingly complex and exclusive event selection. We present the first prototype of a new Processing Unit (PU), the core of the FastTracker processor (FTK). FTK is a real time tracking device for the ATLAS experiment's trigger upgrade. The computing power of the PU is such that a few hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV/c in ATLAS events up to Phase II instantaneous luminosities (3 × 10 34 cm −2 s −1 ) with an event input rate of 100 kHz and a latency below a hundred microseconds. The PU provides massive computing power to minimize the online execution time of complex tracking algorithms. The time consuming pattern recognition problem, generally referred to as the ''combinatorial challenge'', is solved by the Associative Memory (AM) technology exploiting parallelism to the maximum extent; it compares the event to all pre-calculated ''expectations'' or ''patterns'' (pattern matching) simultaneously, looking for candidate tracks called ''roads''. This approach reduces to a linear behavior the typical exponential complexity of the CPU based algorithms. Pattern recognition is completed by the time data are loaded into the AM devices. We report on the design of the first Processing Unit prototypes. The design had to address the most challenging aspects of this technology: a huge number of detector clusters (''hits'') must be distributed at high rate with very large fan-out to all patterns (10 Million patterns will be located on 128 chips placed on a single board) and a huge number of roads must be collected and sent back to the FTK post-pattern-recognition functions. A network of high speed serial links is used to solve the data distribution problem.
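
    The pattern matching that the Associative Memory performs can be pictured as follows: each stored pattern lists one coarse-resolution hit (a "super-strip") per detector layer, and a road fires when every layer of the pattern has a matching hit in the event. The sketch below reproduces that matching logic in software with an invented pattern bank and layer count; the AM chips evaluate all stored patterns simultaneously in hardware, which is what removes the combinatorial cost.

        # Each pattern lists one super-strip id per detector layer (invented numbers).
        pattern_bank = {
            "road_0": (3, 7, 1, 9),
            "road_1": (3, 6, 1, 8),
            "road_2": (2, 7, 0, 9),
        }

        def find_roads(event_hits, bank, max_missing=0):
            """event_hits: one set of super-strip ids per layer; returns the fired roads."""
            fired = []
            for road, pattern in bank.items():        # the AM checks every pattern in parallel
                misses = sum(ss not in event_hits[layer]
                             for layer, ss in enumerate(pattern))
                if misses <= max_missing:
                    fired.append(road)
            return fired

        event = [{3, 2}, {7}, {1, 0}, {9, 8}]         # hits per layer for one event
        print(find_roads(event, pattern_bank))        # -> ['road_0', 'road_2']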

  20. Mercury contamination in bats from the central United States.

    Science.gov (United States)

    Korstian, Jennifer M; Chumchal, Matthew M; Bennett, Victoria J; Hale, Amanda M

    2018-01-01

    Mercury (Hg) is a highly toxic metal that has detrimental effects on wildlife. We surveyed Hg concentrations in 10 species of bats collected at wind farms in the central United States and found contamination in all species. Mercury concentration in fur was highly variable both within and between species (range: 1.08-10.52 µg/g). Despite the distance between sites (up to 1200 km), only 2 of the 5 species sampled at multiple locations had fur Hg concentrations that differed between sites. Mercury concentrations observed in the present study all fell within the previously reported ranges for bats collected from the northeastern United States and Canada, although many of the bats we sampled had lower maximum Hg concentrations. Juvenile bats had lower concentrations of Hg in fur compared with adult bats, and we found no significant effect of sex on Hg concentrations in fur. For a subset of 2 species, we also measured Hg concentration in muscle tissue; concentrations were much higher in fur than in muscle, and Hg concentrations in the 2 tissue types were weakly correlated. Abundant wind farms and ongoing postconstruction fatality surveys offer an underutilized opportunity to obtain tissue samples that can be used to assess Hg contamination in bats. Environ Toxicol Chem 2018;37:160-165. © 2018 SETAC. © 2017 SETAC.

  1. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    Science.gov (United States)

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space is becoming more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation could be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular, and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel on each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
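
    The most parallel-friendly part of such a simulator is the diffusion update, which is independent for every molecule; on the GPU each molecule gets its own thread. The vectorized CPU sketch below shows the same per-molecule Brownian step; the diffusion coefficient, time step and box size are illustrative assumptions, and the boundary handling is deliberately crude.

        import numpy as np

        def diffuse(positions, D, dt, rng):
            """One Brownian-dynamics step: an independent Gaussian displacement per molecule,
            with standard deviation sqrt(2*D*dt) in each coordinate."""
            sigma = np.sqrt(2.0 * D * dt)
            return positions + rng.normal(0.0, sigma, size=positions.shape)

        rng = np.random.default_rng(1)
        pos = rng.uniform(0.0, 10.0, size=(100_000, 3))   # 1e5 molecules in a 10x10x10 box
        for _ in range(100):
            pos = diffuse(pos, D=1.0, dt=1e-3, rng=rng)
            np.clip(pos, 0.0, 10.0, out=pos)              # crude boundary handling (clamp to box)
        print(pos.mean(axis=0))

    Bimolecular reactions are what make the full algorithm harder to parallelize, since they require finding nearby molecule pairs rather than purely independent per-molecule updates.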

  2. Mapping the Information Trace in Local Field Potentials by a Computational Method of Two-Dimensional Time-Shifting Synchronization Likelihood Based on Graphic Processing Unit Acceleration.

    Science.gov (United States)

    Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You

    2017-12-01

    The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.

  3. Using distributed processing on a local area network to increase available computing power

    International Nuclear Information System (INIS)

    Capps, K.S.; Sherry, K.J.

    1996-01-01

    The migration from central computers to desktop computers distributed the total computing horsepower of a system over many different machines. A typical engineering office may have several networked desktop computers that are sometimes idle, especially after work hours and when people are absent. Users would benefit if applications were able to use these networked computers collectively. This paper describes a method of distributing the workload of an application on one desktop system to otherwise idle systems on the network. The authors present this discussion from a developer's viewpoint, because the developer must modify an application before the user can realize any benefit of distributed computing on available systems
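
    A modern sketch of the same idea, using the standard library's multiprocessing manager to expose a shared work queue that idle desktops on the network can pull from, is given below. The address, authkey and chunking are placeholders; the original work modified the application itself rather than using such a framework.

        # server.py -- runs on the machine that owns the workload
        from multiprocessing.managers import BaseManager
        import queue

        work, results = queue.Queue(), queue.Queue()
        for chunk_id in range(100):
            work.put(chunk_id)                  # split the job into independent chunks

        class JobManager(BaseManager):
            pass

        JobManager.register("work", callable=lambda: work)
        JobManager.register("results", callable=lambda: results)

        # Idle desktops run a small client that connects to this address, pulls chunk ids
        # from work(), processes them after hours, and pushes answers back to results().
        server = JobManager(address=("0.0.0.0", 50000), authkey=b"lan-demo").get_server()
        server.serve_forever()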

  4. On Tour... Primary Hardwood Processing, Products and Recycling Unit

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt

    1995-01-01

    Housed within the Department of Wood Science and Forest Products at Virginia Polytechnic Institute is a three-person USDA Forest Service research work unit (with one vacancy) devoted to hardwood processing and recycling research. Phil Araman is the project leader of this truly unique and productive unit, titled "Primary Hardwood Processing, Products and Recycling." The...

  5. Toward a computational theory of conscious processing.

    Science.gov (United States)

    Dehaene, Stanislas; Charles, Lucie; King, Jean-Rémi; Marti, Sébastien

    2014-04-01

    The study of the mechanisms of conscious processing has become a productive area of cognitive neuroscience. Here we review some of the recent behavioral and neuroscience data, with the specific goal of constraining present and future theories of the computations underlying conscious processing. Experimental findings imply that most of the brain's computations can be performed in a non-conscious mode, but that conscious perception is characterized by an amplification, global propagation and integration of brain signals. A comparison of these data with major theoretical proposals suggests that firstly, conscious access must be carefully distinguished from selective attention; secondly, conscious perception may be likened to a non-linear decision that 'ignites' a network of distributed areas; thirdly, information which is selected for conscious perception gains access to additional computations, including temporary maintenance, global sharing, and flexible routing; and finally, measures of the complexity, long-distance correlation and integration of brain signals provide reliable indices of conscious processing, clinically relevant to patients recovering from coma. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Report on the Fourth Reactor Refueling. Laguna Verde Nuclear Central. Unit 1. April-May 1995; Informe de la Cuarta Recarga de Combustible. Central Laguna Verde. Unidad 1. Abril-Mayo 1995

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza L, A; Flores C, E; Lopez G, C P.F.

    1996-12-31

    The fourth refueling of Unit 1 of the Laguna Verde Nuclear Central was carried out from April 17 to May 31, 1995, with the participation of a task group of 358 persons, including technicians and radiation protection officers and auxiliaries. Radiation monitoring and radiological surveillance of the workers were maintained throughout the refueling process, always adhering to ALARA criteria. Check points for radiation levels were set at the primary containment (dry well), the refueling floor, the decontamination room (level 10.5), the turbine building and the radioactive waste building. To take advantage of the refueling outage, rooms 203 and 213 of the turbine building were subject to inspection and maintenance work on valves, heaters and heater drains. Management aspects such as personnel selection and training, costs, and accounting are also presented in this report. Owing to the high cost of the man-hours of the ININ staff, their participation in the refueling process was smaller than in previous years. (Author).

  7. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    1995-03-01

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16--18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. Original plant process computers at the nuclear power plants are becoming obsolete resulting in increasing difficulties in their effectiveness to support plant operations and maintenance. Some utilities have already replaced their plant process computers by more powerful modern computers while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations are presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  8. Amorphous computing in the presence of stochastic disturbances.

    Science.gov (United States)

    Chu, Dominique; Barnes, David J; Perkins, Samuel

    2014-11-01

    Amorphous computing is a non-standard computing paradigm that relies on massively parallel execution of computer code by a large number of small, spatially distributed, weakly interacting processing units. Over the last decade or so, amorphous computing has attracted a great deal of interest both as an alternative model of computing and as an inspiration to understand developmental biology. A number of algorithms have been developed that can take advantage of the massive parallelism of this computing paradigm to solve specific problems. One of the interesting properties of amorphous computers is that they are robust with respect to the loss of individual processing units, in the sense that a removal of some of them should not impact on the computation as a whole. However, much less understood is to what extent amorphous computers are robust with respect to minor disturbances to the individual processing units, such as random motion or occasional faulty computation short of total component failure. In this article we address this question. As an example problem we choose an algorithm to calculate a straight line between two points. Using this example, we find that amorphous computers are not in general robust with respect to Brownian motion and noise, but we find strategies that restore reliable computation even in their presence. We will argue that these strategies are generally applicable and not specific to the particular AC we consider, or even specific to electronic computers. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  9. Tomography system having an ultrahigh-speed processing unit

    International Nuclear Information System (INIS)

    Brunnett, C.J.; Gerth, V.W. Jr.

    1977-01-01

    A transverse section tomography system has an ultrahigh-speed data processing unit for performing back projection and updating. An x-ray scanner directs x-ray beams through a planar section of a subject from a sequence of orientations and positions. The data processing unit includes a scan storage section for retrievably storing a set of filtered scan signals in scan storage locations corresponding to predetermined beam orientations. An array storage section is provided for storing image signals as they are generated
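
    Back projection, the operation the ultrahigh-speed unit performs in hardware, smears each filtered scan profile back across the image plane along its acquisition angle and accumulates the result. The software sketch below shows that accumulation step only; the geometry is simplified and the filtering that precedes it is assumed to have been applied to the input already.

        import numpy as np

        def back_project(filtered_sinogram, angles_deg, n=128):
            """Accumulate filtered projections (one row per view angle) into an n x n image."""
            img = np.zeros((n, n))
            c = (n - 1) / 2.0
            ys, xs = np.mgrid[0:n, 0:n]
            xs, ys = xs - c, ys - c
            for profile, ang in zip(filtered_sinogram, np.deg2rad(angles_deg)):
                # Detector coordinate of every pixel for this view angle.
                t = xs * np.cos(ang) + ys * np.sin(ang) + c
                idx = np.clip(np.round(t).astype(int), 0, profile.size - 1)
                img += profile[idx]              # smear the profile across the image
            return img / len(angles_deg)

    The update for each view depends only on that view's profile, which is why dedicated hardware can overlap back projection and image updating with data acquisition as described above.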

  10. Embracing the quantum limit in silicon computing.

    Science.gov (United States)

    Morton, John J L; McCamey, Dane R; Eriksson, Mark A; Lyon, Stephen A

    2011-11-16

    Quantum computers hold the promise of massive performance enhancements across a range of applications, from cryptography and databases to revolutionary scientific simulation tools. Such computers would make use of the same quantum mechanical phenomena that pose limitations on the continued shrinking of conventional information processing devices. Many of the key requirements for quantum computing differ markedly from those of conventional computers. However, silicon, which plays a central part in conventional information processing, has many properties that make it a superb platform around which to build a quantum computer. © 2011 Macmillan Publishers Limited. All rights reserved

  11. A quantum computer based on recombination processes in microelectronic devices

    International Nuclear Information System (INIS)

    Theodoropoulos, K; Ntalaperas, D; Petras, I; Konofaos, N

    2005-01-01

    In this paper a quantum computer based on the recombination processes occurring in semiconductor devices is presented. A 'data element' and a 'computational element' are derived based on Shockley-Read-Hall statistics, and they can later be used to implement a simple, well-known quantum computing process. Such a paradigm is demonstrated by applying the proposed computer to a well-known physical system involving traps in semiconductor devices

  12. Identification of Learning Processes by Means of Computer Graphics.

    Science.gov (United States)

    Sorensen, Birgitte Holm

    1993-01-01

    Describes a development project for the use of computer graphics and video in connection with an inservice training course for primary education teachers in Denmark. Topics addressed include research approaches to computers; computer graphics in learning processes; activities relating to computer graphics; the role of the teacher; and student…

  13. Control system design specification of advanced spent fuel management process units

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, S. H.; Kim, S. H.; Yoon, J. S

    2003-06-01

    In this study, the design specifications of the instrumentation and control system for advanced spent fuel management process units are presented. The advanced spent fuel management process consists of several process units, such as a slitting device, a dry pulverizing/mixing device, and a metallizer. The control and operation characteristics of the advanced spent fuel management mockup process devices and of the process devices developed in 2001 and 2002 are analysed. An integrated processing system for the unit process control signals is proposed, which improves operating efficiency, and a redundant PLC control system is constructed, which improves reliability. A control scheme is proposed for the time-delayed systems that compensates for the control performance degradation caused by the time delay. The control system design specification is presented for the advanced spent fuel management process units; these design specifications can be used effectively for the detailed design of the advanced spent fuel management process.
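
    One common way to compensate for dead time in such process control loops is a Smith predictor: the controller acts on a delay-free internal model, and the measured output corrects only the model/plant mismatch. It is named here purely to illustrate this class of control scheme; the report's actual scheme may differ, and the model and gains below are assumed values.

        # Discrete-time Smith-predictor sketch for a first-order process with dead time.
        a, b, delay = 0.9, 0.1, 10            # plant: x[k+1] = a*x[k] + b*u[k-delay]
        kp, ki = 2.0, 0.15                    # PI gains acting on the delay-free prediction

        x = xm = integ = 0.0                  # plant state, internal model state, integrator
        u_buf = [0.0] * delay                 # transport delay on the plant input
        xm_buf = [0.0] * delay                # delayed copy of the model output
        setpoint = 1.0

        for k in range(300):
            y = x                             # measurement from the delayed plant
            # Replace the not-yet-visible plant response by the model's delay-free
            # prediction, corrected by the difference between plant and delayed model.
            y_fb = y + xm - xm_buf[0]
            e = setpoint - y_fb
            integ += ki * e
            u = kp * e + integ

            u_buf.append(u)
            xm_buf.append(xm)                 # store the current model output for later
            x = a * x + b * u_buf.pop(0)      # the plant sees its input only after the delay
            xm = a * xm + b * u               # the internal model responds immediately
            xm_buf.pop(0)                     # keep the buffer exactly `delay` entries long

        print(round(x, 3))                    # settles near the setpoint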

  14. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  15. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future

  16. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  17. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
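
    The kernel that such a library accelerates is the partial-likelihood update of Felsenstein's pruning algorithm: a parent node's per-site partials are obtained by pushing each child's partials through the transition matrix of its branch and multiplying the results, independently for every site. The sketch below states that generic calculation; it is not the BEAGLE API, and the matrices and partials are placeholders.

        import numpy as np

        def combine_children(partials_left, partials_right, P_left, P_right):
            """One pruning step for an internal node.
            partials_*: (n_sites, n_states) conditional likelihoods of each child subtree.
            P_*:        (n_states, n_states) branch transition probability matrices.
            Every site is independent, which is what maps well onto GPU threads and SIMD lanes."""
            left = partials_left @ P_left.T       # sum_j P[i, j] * L_child[site, j]
            right = partials_right @ P_right.T
            return left * right

        # Placeholder example: a 4-state (nucleotide) model with 3 sites.
        rng = np.random.default_rng(0)
        P = np.full((4, 4), 0.05) + np.eye(4) * 0.80           # toy transition matrix
        tip_partials = rng.random((2, 3, 4))
        parent = combine_children(tip_partials[0], tip_partials[1], P, P)
        print(parent.shape)                                    # -> (3, 4)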

  18. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Directory of Open Access Journals (Sweden)

    Fei Yan

    2014-10-01

    Full Text Available A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum)-like image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the images. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI that are focused on the color information of the images. In addition, extensions and applications of FRQI representation, such as multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers and a blueprint for quantum video encryption and decryption have also been suggested. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to simulate further interest geared towards the realization of some secure and efficient image and video processing applications on quantum computers.
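
    For reference, the FRQI encoding that the review builds on stores each pixel's colour as an angle entangled with a position register; in the standard formulation from the FRQI literature (restated here, not quoted from this article), a 2^n x 2^n image is written as

        |I(\theta)\rangle = \frac{1}{2^{n}} \sum_{i=0}^{2^{2n}-1}
            \bigl( \cos\theta_i\,|0\rangle + \sin\theta_i\,|1\rangle \bigr) \otimes |i\rangle,
        \qquad \theta_i \in \bigl[0, \tfrac{\pi}{2}\bigr],

    where the single colour qubit carries the angle \theta_i, the 2n-qubit register |i\rangle encodes the pixel position, and the factor 1/2^n normalizes the state to a unit vector.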

  19. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities

    International Nuclear Information System (INIS)

    Coppersmith, Kevin J.; Salomone, Lawrence A.; Fuller, Chris W.; Glaser, Laura L.; Hanson, Kathryn L.; Hartleb, Ross D.; Lettis, William R.; Lindvall, Scott C.; McDuffie, Stephen M.; McGuire, Robin K.; Stirewalt, Gerry L.; Toro, Gabriel R.; Youngs, Robert R.; Slayter, David L.; Bozkurt, Serkan B.; Cumbest, Randolph J.; Falero, Valentina Montaldo; Perman, Roseanne C.; Shumway, Allison M.; Syms, Frank H.; Tuttle, Martitia P.

    2012-01-01

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986) and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model, (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives The regional CEUS SSC model will be of value to readers who are involved in PSHA work, and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  20. Computer processing of dynamic scintigraphic studies

    International Nuclear Information System (INIS)

    Ullmann, V.

    1985-01-01

    The methods are discussed of the computer processing of dynamic scintigraphic studies which were developed, studied or implemented by the authors within research task no. 30-02-03 in nuclear medicine within the five year plan 1981 to 85. This was mainly the method of computer processing radionuclide angiography, phase radioventriculography, regional lung ventilation, dynamic sequential scintigraphy of kidneys and radionuclide uroflowmetry. The problems are discussed of the automatic definition of fields of interest, the methodology of absolute volumes of the heart chamber in radionuclide cardiology, the design and uses are described of the multipurpose dynamic phantom of heart activity for radionuclide angiocardiography and ventriculography developed within the said research task. All methods are documented with many figures showing typical clinical (normal and pathological) and phantom measurements. (V.U.)

  1. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    International Nuclear Information System (INIS)

    Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.

    2011-01-01

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) have become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
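
    The division of labour described above can be caricatured in a few lines: the per-atom neighbor lists for the short-range pair interactions are split between the accelerator and the CPU cores, each part is evaluated independently, and the partial force arrays are summed. This is a conceptual sketch only; the split fraction is static here, whereas the paper describes balancing it dynamically, and the pair potential and cutoff are assumed values.

        import numpy as np

        def lj_forces(pos, neighbors, eps=1.0, sigma=1.0):
            """Lennard-Jones pair forces for the atoms listed in `neighbors` (atom -> neighbor ids)."""
            f = np.zeros_like(pos)
            for i, nbrs in neighbors.items():
                for j in nbrs:
                    r = pos[i] - pos[j]
                    r2 = float(r @ r)
                    s6 = (sigma * sigma / r2) ** 3
                    f[i] += (24.0 * eps / r2) * (2.0 * s6 * s6 - s6) * r
            return f

        def split_neighbors(neighbors, accel_fraction=0.8):
            """Send a fraction of the atoms to the accelerator, the rest to CPU cores
            (a static stand-in for the dynamic load balancing described in the paper)."""
            atoms = list(neighbors)
            cut = int(len(atoms) * accel_fraction)
            return ({i: neighbors[i] for i in atoms[:cut]},    # would run as a GPU kernel
                    {i: neighbors[i] for i in atoms[cut:]})    # runs on the CPU threads

        rng = np.random.default_rng(2)
        pos = rng.random((50, 3)) * 5.0
        cutoff2 = 1.5 ** 2
        nbrs = {i: [j for j in range(50)
                    if j != i and float(np.sum((pos[i] - pos[j]) ** 2)) < cutoff2]
                for i in range(50)}
        accel_part, cpu_part = split_neighbors(nbrs)
        total_force = lj_forces(pos, accel_part) + lj_forces(pos, cpu_part)
        print(total_force.shape)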

  2. Computer Use and Vision-Related Problems Among University Students In Ajman, United Arab Emirate

    OpenAIRE

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-01-01

    Background: The extensive use of computers as medium of teaching and learning in universities necessitates introspection into the extent of computer related health disorders among student population. Aim: This study was undertaken to assess the pattern of computer usage and related visual problems, among University students in Ajman, United Arab Emirates. Materials and Methods: A total of 500 Students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology we...

  3. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
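
    The model class referred to above combines a deterministic drift term, taken from first-principles process knowledge, with a stochastic diffusion term and a discrete-time measurement equation. In the usual grey-box notation (a generic statement of the model class, not a formula reproduced from the paper):

        dx_t = f(x_t, u_t, t, \theta)\,dt + \sigma(u_t, t, \theta)\,d\omega_t,
        \qquad
        y_k = h(x_{t_k}, u_{t_k}, t_k, \theta) + e_k, \quad e_k \sim N(0, S(\theta)),

    where x_t is the state, u_t the input, \omega_t a standard Wiener process and e_k the measurement noise; the parameters \theta are then estimated from plant data, typically by maximum likelihood using a Kalman-filter-based evaluation of the likelihood.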

  4. Towards Process Support for Migrating Applications to Cloud Computing

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali

    2012-01-01

    Cloud computing is an active area of research for industry and academia. There are a large number of organizations providing cloud computing infrastructure and services. In order to utilize these infrastructure resources and services, existing applications need to be migrated to clouds. However...... for supporting migration to cloud computing based on our experiences from migrating an Open Source System (OSS), Hackystat, to two different cloud computing platforms. We explained the process by performing a comparative analysis of our efforts to migrate Hackystat to Amazon Web Services and Google App Engine....... We also report the potential challenges, suitable solutions, and lessons learned to support the presented process framework. We expect that the reported experiences can serve as guidelines for those who intend to migrate software applications to cloud computing....

  5. A computer control system for a research reactor

    International Nuclear Information System (INIS)

    Crawford, K.C.; Sandquist, G.M.

    1987-01-01

    Most reactor applications until now, have not required computer control of core output. Commercial reactors are generally operated at a constant power output to provide baseline power. However, if commercial reactor cores are to become load following over a wide range, then centralized digital computer control is required to make the entire facility respond as a single unit to continual changes in power demand. Navy and research reactors are much smaller and simpler and are operated at constant power levels as required, without concern for the number of operators required to operate the facility. For navy reactors, centralized digital computer control may provide space savings and reduced personnel requirements. Computer control offers research reactors versatility to efficiently change a system to develop new ideas. The operation of any reactor facility would be enhanced by a controller that does not panic and is continually monitoring all facility parameters. Eventually very sophisticated computer control systems may be developed which will sense operational problems, diagnose the problem, and depending on the severity of the problem, immediately activate safety systems or consult with operators before taking action

  6. Conceptual design of centralized control system for LHD

    International Nuclear Information System (INIS)

    Kaneko, H.; Yamazaki, K.; Taniguchi, Y.

    1992-01-01

    A centralized control system for a fusion experimental machine is discussed. A configuration in which a number of complete and uniform local systems are controlled by a central computer, a timer and an interlock system is appropriate for the control system of the Large Helical Device (LHD). The connection among local systems can be made by Ethernet, because faster transmission of control data is handled by a dedicated system. (author)

  7. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    Science.gov (United States)

    Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.

    2011-12-01

    Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also some research and application programs being launched in academia and governments to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) to deliver on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several GES DISC's applications to the Nebula as a proof of concept, including: a) The Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms being used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly

  8. Social processes underlying acculturation: a study of drinking behavior among immigrant Latinos in the Northeast United States

    Science.gov (United States)

    LEE, CHRISTINA S.; LÓPEZ, STEVEN REGESER; COBLY, SUZANNE M.; TEJADA, MONICA; GARCÍA-COLL, CYNTHIA; SMITH, MARCIA

    2010-01-01

    Study Goals To identify social processes that underlie the relationship of acculturation and heavy drinking behavior among Latinos who have immigrated to the Northeast United States of America (USA). Method Community-based recruitment strategies were used to identify 36 Latinos who reported heavy drinking. Participants were 48% female, 23 to 56 years of age, and were from South or Central America (39%) and the Caribbean (24%). Six focus groups were audiotaped and transcribed. Results Content analyses indicated that the social context of drinking is different in the participants’ countries of origin and in the United States. In Latin America, alcohol consumption was part of everyday living (being with friends and family). Nostalgia and isolation reflected some of the reasons for drinking in the USA. Results suggest that drinking in the Northeastern United States (US) is related to Latinos’ adaptation to a new sociocultural environment. Knowledge of the shifting social contexts of drinking can inform health interventions. PMID:20376331

  9. Lithofacies analysis of the Simpson Group in south-central Kansas

    International Nuclear Information System (INIS)

    Doveton, J.H.; Charpentier, R.R.; Metzger, E.P.

    1990-01-01

    This book discusses detailed stratigraphy and lithofacies of the oil-productive Middle Ordovician Simpson Group in south-central Kansas. The report presents results of studies of the Simpson Group in Barber, Comanche, Kiowa, and Pratt counties. The high density of exploration holes and their associated logs allowed a detailed stratigraphic subdivision to be made of shale, sandstone, and sandy carbonate units. The lateral changes in these units are depicted in a series of maps and cross sections and show distinctive lithofacies patterns that reflect a history of northward-moving marine transgression. Working with digital data from gamma-ray logs, the geologists used computer methods to generate a series of cross sections of the Simpson Group, based on the statistical moments of the log traces. Automated mapping displayed the shapes and disposition of shale and non-shale units as continuous features in three dimensions. The ground truth information from drill cuttings further refined interpretations of stratigraphy, lithofacies, and depositional history implied by these computer models

  10. A study on mineralization U,REE and related processes in anomaly No.6 Khoshomy area central Iran

    International Nuclear Information System (INIS)

    Heidaryan, F.

    2005-01-01

    Uranium mineralization in the Khoshomy prospect, located in the central part of Iran, shows values of 303-15000 cps and 14 to 4000 ppm. The main rock types include gneiss, granite, pegmatite and migmatite, which are influenced by pegmatitic-albitic (quartz-feldspathic) veins. Acidic and basic dykes, granodioritic units, and dolomite and marble have also been observed. The alteration associated with the mineralization is potassic, argillic, propylitic, carbonatization, silicification and hematitization. Uranium mineralization occurred in a hydrothermal phase together with Cu, Mo, Ni and Au. Uranium primary minerals include pitchblende, coffinite and uraninite; uranium secondary minerals include uranophane and boltwoodite. REE mineralization occurred with the potassic phase in the pegmatitization process

  11. Thinking processes used by high-performing students in a computer programming task

    Directory of Open Access Journals (Sweden)

    Marietjie Havenga

    2011-07-01

    Full Text Available Computer programmers must be able to understand programming source code and write programs that execute complex tasks to solve real-world problems. This article is a transdisciplinary study at the intersection of computer programming, education and psychology. It outlines the role of mental processes in the process of programming and indicates how successful thinking processes can support computer science students in writing correct and well-defined programs. A mixed methods approach was used to better understand the thinking activities and programming processes of participating students. Data collection involved both computer programs and students’ reflective thinking processes recorded in their journals. This enabled analysis of psychological dimensions of participants’ thinking processes and their problem-solving activities as they considered a programming problem. Findings indicate that the cognitive, reflective and psychological processes used by high-performing programmers contributed to their success in solving a complex programming problem. Based on the thinking processes of high performers, we propose a model of integrated thinking processes, which can support computer programming students. Keywords: Computer programming, education, mixed methods research, thinking processes.  Disciplines: Computer programming, education, psychology

  12. X/Qs and unit dose calculations for Central Waste Complex interim safety basis effort

    International Nuclear Information System (INIS)

    Huang, C.H.

    1996-01-01

    The objective of this work is to calculate the ground-level release dispersion factors (X/Q) and unit doses for an onsite facility and for offsite receptors at the site boundary and at Highway 240, considering plume meander, the building wake effect, plume rise, and their combined effect. The release location is Central Waste Complex Building P4 in the 200 West Area. The onsite facility is located at Building P7. Acute ground-level release 99.5 percentile dispersion factors (X/Q) were generated using the GXQ code. The unit doses were calculated using the GENII code. The dimensions of Building P4 are 15 m (W) x 24 m (L) x 6 m (H)
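
    The GXQ and GENII formulations are not reproduced in this record; as a rough, hedged illustration of what a ground-level dispersion factor calculation looks like, the sketch below evaluates the standard Gaussian plume centerline X/Q. The power-law sigma coefficients, wind speed, and receptor distances are illustrative assumptions, not values from the report.

```python
import math

def sigma_pg(x_m, a, b):
    """Simple power-law dispersion fit sigma = a * x^b (x in metres); coefficients are illustrative."""
    return a * x_m ** b

def chi_over_q(x_m, u_ms, ay=0.08, by=0.90, az=0.06, bz=0.80):
    """Ground-level centerline X/Q (s/m^3) for a ground release:
    X/Q = 1 / (pi * sigma_y * sigma_z * u)."""
    sy = sigma_pg(x_m, ay, by)
    sz = sigma_pg(x_m, az, bz)
    return 1.0 / (math.pi * sy * sz * u_ms)

if __name__ == "__main__":
    for x in (100.0, 1000.0, 10000.0):   # hypothetical receptor distances, m
        print(f"x = {x:7.0f} m   X/Q = {chi_over_q(x, u_ms=2.0):.3e} s/m^3")
```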

  13. Perceptual weights for loudness reflect central spectral processing

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2011-01-01

    Weighting patterns for loudness obtained using the reverse correlation method are thought to reveal the relative contributions of different frequency regions to total loudness, the equivalent of specific loudness. Current models of loudness assume that specific loudness is determined by peripheral...... processes such as compression and masking. Here we test this hypothesis using 20-tone harmonic complexes (200Hz f0, 200 to 4000Hz, 250 ms, 65 dB/Component) added in opposite phase relationships (Schroeder positive and negative). Due to the varying degree of envelope modulations, these time-reversed harmonic...... processes and reflect a central frequency weighting template....

  14. On dosimetry of radiodiagnosis facilities, mainly focused on computed tomography units

    International Nuclear Information System (INIS)

    Ghitulescu, Zoe

    2008-01-01

    The talk refers to the dosimetry of computed tomography units and is structured in three parts, each stressed to a greater or lesser extent: 1) basics of image acquisition using the computed tomography technique; 2) effective dose calculation for a patient and its assessment using the BERT concept; 3) recommended actions for reaching a good compromise between the delivered dose and the image quality. The aim of the first part is for the reader to become acquainted with the CT technique in order to be able to understand the given example of an effective dose calculation and its conversion into time units using the BERT concept. The conclusion drawn is that the effective dose calculation accomplished by the medical physicist (using dedicated software for the CT scanner and the exam type) and converted into time units through the BERT concept could then be communicated by the radiologist together with the diagnostic notes. Thus, a minimum of information for patients regarding the nature and type of radiation is obviously necessary, for instance with the help of leaflets. The third part discusses the factors which lead to good image quality while taking into account the ALARA principle of radiation protection, which states that the dose should be 'as low as reasonably achievable'. (author)

  15. [The nursing process at a burns unit: an ethnographic study].

    Science.gov (United States)

    Rossi, L A; Casagrande, L D

    2001-01-01

    This ethnographic study aimed at understanding the cultural meaning that nursing professionals working at a Burns Unit attribute to the nursing process as well as at identifying the factors affecting the implementation of this methodology. Data were collected through participant observation and semi-structured interviews. The findings indicate that, to the nurses from the investigated unit, the nursing process seems to be identified as bureaucratic management. Some factors determining this perception are: the way in which the nursing process has been taught and interpreted, routine as a guideline for nursing activity, and knowledge and power in the life-world of the Burns Unit.

  16. Status and trends of land change in the Midwest–South Central United States—1973 to 2000

    Science.gov (United States)

    Auch, Roger F.; Karstensen, Krista A.

    2015-12-10

    U.S. Geological Survey (USGS) Professional Paper 1794–C is the third in a four-volume series on the status and trends of the Nation’s land use and land cover, providing an assessment of the rates and causes of land-use and land-cover change in the Midwest–South Central United States between 1973 and 2000. Volumes A, B, and D provide similar analyses for the Western United States, the Great Plains of the United States, and the Eastern United States, respectively. The assessments of land-use and land-cover trends are conducted on an ecoregion-by-ecoregion basis, and each ecoregion assessment is guided by a nationally consistent study design that includes mapping, statistical methods, field studies, and analysis. Individual assessments provide a picture of the characteristics of land change occurring in a given ecoregion; in combination, they provide a framework for understanding the complex national mosaic of change and also the causes and consequences of change. Thus, each volume in this series provides a regional assessment of how (and how fast) land use and land cover are changing, and why. The four volumes together form the first comprehensive picture of land change across the Nation.Geographic understanding of land-use and land-cover change is directly relevant to a wide variety of stakeholders, including land and resource managers, policymakers, and scientists. The chapters in this volume present brief summaries of the patterns and rates of land change observed in each ecoregion in the Midwest–South Central United States, together with field photographs, statistics, and comparisons with other assessments. In addition, a synthesis chapter summarizes the scope of land change observed across the entire Midwest–South Central United States. The studies provide a way of integrating information across the landscape, and they form a critical component in the efforts to understand how land use and land cover affect important issues such as the provision of

  17. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Science.gov (United States)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  18. Computational approach on PEB process in EUV resist: multi-scale simulation

    Science.gov (United States)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2017-03-01

    For decades, downsizing has been a key issue for high performance and low cost in semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve this goal. As the predominant process in extreme ultraviolet lithography for determining resolution and sensitivity, post exposure bake has mainly been studied by experimental groups, but development of its photoresist is at a breaking point because the mechanism at work during the process has not been unveiled. Herein, we provide a theoretical approach to investigate the underlying mechanism of the post exposure bake process in chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculations (quantum mechanical simulation) were conducted to quantitatively predict the activation energy and probability of the chemical reactions, and these were applied to molecular dynamics simulation to construct a reliable computational model. The overall chemical reactions were then simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multiscale model unifies the phenomena of the quantum and atomic scales during the post exposure bake process, and it will be helpful for understanding critical factors affecting the performance of the resulting photoresist and for designing next-generation materials.

  19. Fire and climate suitability for woody vegetation communities in the south central United States

    Science.gov (United States)

    Stroh, Esther; Struckhoff, Matthew; Stambaugh, Michael C.; Guyette, Richard P.

    2018-01-01

    Climate and fire are primary drivers of plant species distributions. Long-term management of south central United States woody vegetation communities can benefit from information on potential changes in climate and fire frequencies, and how these changes might affect plant communities. We used historical (1900 to 1929) and future (2040 to 2069 and 2070 to 2099) projected climate data for the conterminous US to estimate reference and future fire probabilities

  20. permGPU: Using graphics processing units in RNA microarray association studies

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2010-06-01

    Full Text Available Abstract Background Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, that are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. Results We have developed a CUDA based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. Conclusions permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
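
    permGPU itself is a CUDA and R implementation; as a language-neutral illustration of the embarrassingly parallel task it accelerates, the sketch below runs a permutation test of a difference-in-means statistic across many genes with NumPy on the CPU. The array shapes, test statistic, and permutation count are assumptions for demonstration only, not permGPU's interface.

```python
import numpy as np

def perm_pvalues(expr, labels, n_perm=1000, rng=None):
    """Permutation p-values for a difference-in-means statistic, one per gene.
    expr: (n_genes, n_samples) expression matrix; labels: binary group vector."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels, dtype=bool)
    def stat(lab):
        return expr[:, lab].mean(axis=1) - expr[:, ~lab].mean(axis=1)
    observed = stat(labels)
    exceed = np.zeros(expr.shape[0])
    for _ in range(n_perm):
        perm = rng.permutation(labels)          # relabel samples at random
        exceed += np.abs(stat(perm)) >= np.abs(observed)
    return (exceed + 1) / (n_perm + 1)          # add-one correction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(500, 40))           # 500 genes, 40 samples (synthetic)
    labels = np.repeat([0, 1], 20)
    print(perm_pvalues(expr, labels, n_perm=200, rng=1)[:5])
```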

  1. [INVITED] Computational intelligence for smart laser materials processing

    Science.gov (United States)

    Casalino, Giuseppe

    2018-03-01

    Computational intelligence (CI) involves using a computer algorithm to capture hidden knowledge from data and to use it for training an 'intelligent machine' to make complex decisions without human intervention. As simulation becomes more prevalent from design and planning to manufacturing and operations, laser material processing can also benefit from computer-generated knowledge through soft computing. This work is a review of the state of the art on the methodology and applications of CI in laser materials processing (LMP), which is nowadays receiving increasing interest from world-class manufacturers and Industry 4.0. The focus is on the methods that have proven effective and robust in solving several problems in welding, cutting, drilling, surface treating and additive manufacturing using the laser beam. After a basic description of the most common computational intelligence techniques employed in manufacturing, four sections, namely laser joining, machining, surface treatment, and additive manufacturing, cover the most recent applications in the already extensive literature on CI in LMP. Finally, emerging trends and future challenges are identified and discussed.

  2. Programmable neural processing on a smartdust for brain-computer interfaces.

    Science.gov (United States)

    Yuwen Sun; Shimeng Huang; Oresko, Joseph J; Cheng, Allen C

    2010-10-01

    Brain-computer interfaces (BCIs) offer tremendous promise for improving the quality of life for disabled individuals. BCIs use spike sorting to identify the source of each neural firing. To date, spike sorting has been performed by either using off-chip analysis, which requires a wired connection penetrating the skull to a bulky external power/processing unit, or via custom application-specific integrated circuits that lack the programmability to perform different algorithms and upgrades. In this research, we propose and test the feasibility of performing on-chip, real-time spike sorting on a programmable smartdust, including feature extraction, classification, compression, and wireless transmission. A detailed power/performance tradeoff analysis using DVFS is presented. Our experimental results show that the execution time and power density meet the requirements to perform real-time spike sorting and wireless transmission on a single neural channel.
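
    The abstract does not specify the on-chip pipeline (feature extraction, classification, compression); as a hedged sketch of the first stage only, the code below performs simple threshold-based spike detection and peak/trough feature extraction on a synthetic neural trace. The threshold rule, snippet window, and sampling rate are illustrative choices, not the paper's algorithm.

```python
import numpy as np

FS = 24_000          # assumed sampling rate, Hz
WIN = 32             # samples per extracted spike snippet

def detect_spikes(trace, k=4.5):
    """Return indices where the signal first crosses k times a robust noise estimate."""
    noise = np.median(np.abs(trace)) / 0.6745       # robust sigma estimate
    above = np.flatnonzero(trace > k * noise)
    # keep only the first crossing of each burst
    return above[np.insert(np.diff(above) > WIN, 0, True)]

def extract_features(trace, idx):
    """Peak and trough amplitude of each aligned snippet (a minimal feature pair)."""
    snippets = np.stack([trace[i:i + WIN] for i in idx if i + WIN <= trace.size])
    return np.column_stack([snippets.max(axis=1), snippets.min(axis=1)])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    trace = rng.normal(0, 1, FS)                    # 1 s of synthetic noise
    trace[5000:5000 + WIN] += 8 * np.hanning(WIN)   # inject one fake spike
    idx = detect_spikes(trace)
    print(idx, extract_features(trace, idx))
```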

  3. 77 FR 31026 - Use of Computer Simulation of the United States Blood Supply in Support of Planning for Emergency...

    Science.gov (United States)

    2012-05-24

    ...] Use of Computer Simulation of the United States Blood Supply in Support of Planning for Emergency... entitled: ``Use of Computer Simulation of the United States Blood Supply in Support of Planning for... and panel discussions with experts from academia, regulated industry, government, and other...

  4. Dynamic wavefront creation for processing units using a hybrid compactor

    Energy Technology Data Exchange (ETDEWEB)

    Puthoor, Sooraj; Beckmann, Bradford M.; Yudanov, Dmitri

    2018-02-20

    A method, a non-transitory computer readable medium, and a processor for repacking dynamic wavefronts during program code execution on a processing unit are presented, where each dynamic wavefront includes multiple threads. If a branch instruction is detected, a determination is made whether all wavefronts following the same control path in the program code have reached a compaction point, which is the branch instruction. If no branch instruction is detected in executing the program code, a determination is made whether all wavefronts following the same control path have reached a reconvergence point, which is the beginning of a program code segment to be executed by both the taken branch and the not-taken branch from a previous branch instruction. The dynamic wavefronts are repacked with all threads that follow the same control path if all wavefronts following the same control path have reached the branch instruction or the reconvergence point.
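
    The abstract describes the idea only at a high level; the sketch below models it in plain Python rather than in GPU hardware: threads carry a control-path tag, and once every wavefront on that path has reached the compaction (branch) or reconvergence point, the threads are regrouped into densely packed wavefronts. The wavefront width and data structures are assumptions for illustration.

```python
from collections import defaultdict

WAVEFRONT_WIDTH = 8   # assumed SIMD width for the illustration

def repack(wavefronts, arrived_at_sync):
    """Group threads of all wavefronts that reached the sync point by control path,
    then cut each group into wavefronts of up to WAVEFRONT_WIDTH threads."""
    by_path = defaultdict(list)
    pending = []
    for wf in wavefronts:
        if arrived_at_sync(wf):
            by_path[wf["path"]].extend(wf["threads"])
        else:
            pending.append(wf)                      # this path has not fully synced yet
    repacked = []
    for path, threads in by_path.items():
        for i in range(0, len(threads), WAVEFRONT_WIDTH):
            repacked.append({"path": path, "threads": threads[i:i + WAVEFRONT_WIDTH]})
    return repacked + pending

if __name__ == "__main__":
    wfs = [
        {"path": "taken",     "threads": [0, 3]},
        {"path": "taken",     "threads": [5, 6, 9]},
        {"path": "not-taken", "threads": [1, 2]},
    ]
    print(repack(wfs, arrived_at_sync=lambda wf: True))
```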

  5. Examining the central and peripheral processes of written word production through meta-analysis

    Directory of Open Access Journals (Sweden)

    Jeremy ePurcell

    2011-10-01

    Full Text Available Producing written words requires central cognitive processes (such as orthographic long-term and working memory) as well as more peripheral processes responsible for generating the motor actions needed for producing written words in a variety of formats (handwriting, typing, etc.). In recent years, various functional neuroimaging studies have examined the neural substrates underlying the central and peripheral processes of written word production. This study provides the first quantitative meta-analysis of these studies by applying Activation Likelihood Estimation (ALE) methods (Turkeltaub et al., 2002). For alphabetic languages, we identified 11 studies (with a total of 17 experimental contrasts) that had been designed to isolate central and/or peripheral processes of word spelling (total number of participants = 146). Three ALE meta-analyses were carried out. One involved the complete set of 17 contrasts; two others were applied to subsets of contrasts to distinguish the neural substrates of central from peripheral processes. These analyses identified a network of brain regions reliably associated with the central and peripheral processes of word spelling. Among the many significant results is the finding that the regions with the greatest correspondence across studies were in the left inferior temporal/fusiform gyri and left inferior frontal gyrus. Furthermore, although the angular gyrus has traditionally been identified as a key site within the written word production network, none of the meta-analyses found it to be a consistent site of activation, identifying instead a region just superior/medial to the left angular gyrus in the left posterior intraparietal sulcus. In general, these meta-analyses and the discussion of results provide a valuable foundation upon which future studies that examine the neural basis of written word production can build.

  6. Report of the Central Tracking Group

    International Nuclear Information System (INIS)

    Cassel, D.G.; Hanson, G.G.

    1986-10-01

    Issues involved in building a realistic central tracking system for a general-purpose 4π detector for the SSC are addressed. Such a central tracking system must be capable of running at the full design luminosity of 10^33 cm^-2 s^-1. Momentum measurement was required in a general-purpose 4π detector. Limitations on charged particle tracking detectors at the SSC imposed by rates and radiation damage are reviewed. Cell occupancy is the dominant constraint, which led us to the conclusion that only small cells, either wires or straw tubes, are suitable for a central tracking system at the SSC. Mechanical problems involved in building a central tracking system of either wires or straw tubes were studied, and our conclusion was that it is possible to build such a large central tracking system. Of course, a great deal of research and development is required. We also considered central tracking systems made of scintillating fibers or silicon microstrips, but our conclusion was that neither is a realistic candidate given the current state of technology. We began to work on computer simulation of a realistic central tracking system. Events from interesting physics processes at the SSC will be complex and will be further complicated by hits from out-of-time bunch crossings and multiple interactions within the same bunch crossing. Detailed computer simulations are needed to demonstrate that the pattern recognition and tracking problems can be solved

  7. Process-centric IT in Practice

    DEFF Research Database (Denmark)

    Siurdyban, Artur; Nielsen, Peter Axel

    2012-01-01

    This case illustrates and discusses the issues and challenges at Kerrtec Corporation in their effort to establish process-centric IT management. The case describes how a local business unit in Kerrtec managed their business processes and how that created a necessity for IT to be managed to match the business processes. It also describes how the central IT department at corporate headquarters responded to requests rooted in business processes. In discussing the challenges for Kerrtec, it is clear that they will have to map out the needed competences for process-centric IT management. In particular, they should find governance structures which ensure that there is a fruitful collaboration between the corporate IT department and the local business units. This collaboration should also include different competences with both IT and process management, and competences differing because they are centralized...

  8. Investigation of the Dynamic Melting Process in a Thermal Energy Storage Unit Using a Helical Coil Heat Exchanger

    Directory of Open Access Journals (Sweden)

    Xun Yang

    2017-08-01

    Full Text Available In this study, the dynamic melting process of the phase change material (PCM in a vertical cylindrical tube-in-tank thermal energy storage (TES unit was investigated through numerical simulations and experimental measurements. To ensure good heat exchange performance, a concentric helical coil was inserted into the TES unit to pipe the heat transfer fluid (HTF. A numerical model using the computational fluid dynamics (CFD approach was developed based on the enthalpy-porosity method to simulate the unsteady melting process including temperature and liquid fraction variations. Temperature measurements using evenly spaced thermocouples were conducted, and the temperature variation at three locations inside the TES unit was recorded. The effects of the HTF inlet parameters were investigated by parametric studies with different temperatures and flow rate values. Reasonably good agreement was achieved between the numerical prediction and the temperature measurement, which confirmed the numerical simulation accuracy. The numerical results showed the significance of buoyancy effect for the dynamic melting process. The system TES performance was very sensitive to the HTF inlet temperature. By contrast, no apparent influences can be found when changing the HTF flow rates. This study provides a comprehensive solution to investigate the heat exchange process of the TES system using PCM.
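
    The paper's CFD model uses the enthalpy-porosity method in a full 3-D setting; as a much reduced, hedged sketch of the core idea only, the 1-D explicit enthalpy update below recovers the liquid fraction from nodal enthalpy at each time step. The material properties, grid, and wall temperature are made-up values, not those of the studied TES unit.

```python
import numpy as np

# Illustrative PCM properties (not from the paper)
rho, cp, k, L = 800.0, 2000.0, 0.2, 180e3     # kg/m^3, J/(kg K), W/(m K), J/kg
Tm, T_hot, T0 = 30.0, 60.0, 20.0              # melting, wall, initial temperatures (C)
nx, dx, dt, steps = 50, 1e-3, 0.05, 20000

H = rho * cp * T0 * np.ones(nx)               # volumetric enthalpy field

def temperature(H):
    """Invert enthalpy to temperature for an isothermal phase change at Tm."""
    Hs = rho * cp * Tm                         # enthalpy at the start of melting
    T = H / (rho * cp)
    T = np.where((H > Hs) & (H < Hs + rho * L), Tm, T)             # mushy zone pinned at Tm
    T = np.where(H >= Hs + rho * L, (H - rho * L) / (rho * cp), T) # fully liquid
    return T

for _ in range(steps):
    T = temperature(H)
    T[0] = T_hot                               # heated wall (Dirichlet boundary)
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    H += dt * k * lap                          # explicit conduction update of enthalpy

liquid_fraction = np.clip((H - rho * cp * Tm) / (rho * L), 0, 1)
print("melted cells:", int((liquid_fraction > 0.99).sum()))
```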

  9. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  10. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.

  11. Motivation enhances visual working memory capacity through the modulation of central cognitive processes.

    Science.gov (United States)

    Sanada, Motoyuki; Ikeda, Koki; Kimura, Kenta; Hasegawa, Toshikazu

    2013-09-01

    Motivation is well known to enhance working memory (WM) capacity, but the mechanism underlying this effect remains unclear. The WM process can be divided into encoding, maintenance, and retrieval, and in a change detection visual WM paradigm, the encoding and retrieval processes can be subdivided into perceptual and central processing. To clarify which of these segments are most influenced by motivation, we measured ERPs in a change detection task with differential monetary rewards. The results showed that the enhancement of WM capacity under high motivation was accompanied by modulations of late central components but not those reflecting attentional control on perceptual inputs across all stages of WM. We conclude that the "state-dependent" shift of motivation impacted the central, rather than the perceptual functions in order to achieve better behavioral performances. Copyright © 2013 Society for Psychophysiological Research.

  12. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter, with and without central auditory processing disorders. The participants were twenty individuals with stuttering, from 7 to 17 years old, divided into two groups: Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups, because there was an improvement in fluency only in the individuals without auditory processing disorder.

  13. Initial quantitative evaluation of computed radiography in an intensive care unit

    International Nuclear Information System (INIS)

    Hillis, D.J.; McDonald, I.G.; Kelly, W.J.

    1996-01-01

    The first computed radiography (CR) unit in Australia was installed at St Vincent's Hospital, Melbourne, in February 1994. An initial qualitative evaluation of the attitude of the intensive care unit (ICU) physicians to the CR unit was conducted by use of a survey. The results of the survey of ICU physicians indicated that images were available faster than under the previous system and that the use of the CR system was preferred to evaluate chest tubes and line placements. While it is recognized that a further detailed radiological evaluation of the CR system is required to establish the diagnostic performance of CR compared with conventional film, some comments on the implementation of the system and ICU physician attitudes to the CR system are put forward for consideration by other hospitals examining the possible use of CR systems. 11 refs., 1 tab

  14. Scale up risk of developing oil shale processing units

    International Nuclear Information System (INIS)

    Oepik, I.

    1991-01-01

    The experiences in oil shale processing in three large countries, China, the U.S.A. and the U.S.S.R., have demonstrated that the relative scale-up risk of developing oil shale processing units is related to the scale-up factor. Against the background of large programmes for developing the oil shale industry branch, i.e. the $30 billion investments in Colorado and Utah or the 50 million t/year oil shale processing in Estonia and the Leningrad Region planned in the late seventies, the absolute scope of the scale-up risk of developing single retorting plants seems to be justified. But under the conditions of low crude oil prices, when the large-scale development of the oil shale processing industry is stopped, the absolute scope of the scale-up risk has to be divided among a small number of units. Therefore, it is reasonable to build new commercial oil shale processing plants with a minimum scale-up risk. For example, in Estonia a new oil shale processing plant with gas combustion retorts, projected to start in the early nineties, will be equipped with four units of 1500 t/day enriched oil shale throughput each, designed with a scale-up factor M=1.5 and with a minimum scale-up risk of only r=2.5-4.5%. The oil shale retorting unit for the PAMA plant in Israel [1] is planned to be developed in three steps, also with minimum scale-up risk: feasibility studies in Colorado with Israel's shale at a 250 t/day Paraho retort and other tests, a demonstration retort of 700 t/day and M=2.8 in Israel, and commercial retorts in the early nineties with a capacity of about 1000 t/day and M=1.4. The scale-up risk of the PAMA project, r=2-4%, is approximately the same as that in Estonia. Knowledge of the scope of the scale-up risk of developing oil shale processing retorts assists in the calculation of production costs when erecting new units. (author). 9 refs., 2 tabs

  15. Arithmetical unit, interrupt hardware and input-output channel for the computer Bel

    International Nuclear Information System (INIS)

    Fyroe, Karl-Johan

    1969-01-01

    This thesis contains a description of a small general-purpose computer that uses characters, variable word length and two-address instructions, and that works in decimal (NBCD). We have implemented three interrupt lines with a fixed priority. The channel is selective and generally has access to the entire memory. With slow I/O devices, time sharing between the channel and the processor is possible in the central memory buffer area. (author) [fr
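
    NBCD (natural binary-coded decimal) stores each decimal digit in its own four-bit group. The short sketch below is a hedged illustration of the representation rather than a description of Bel's actual circuitry: it encodes a number digit by digit and applies the classic add-6 correction used in decimal adders.

```python
def to_nbcd(n):
    """Encode a non-negative integer as a list of 4-bit NBCD digit codes (MSD first)."""
    return [int(d) for d in str(n)]          # each decimal digit already fits in 4 bits

def nbcd_add_digit(a, b, carry_in=0):
    """Add two NBCD digits with the +6 correction when the 4-bit sum exceeds 9."""
    s = a + b + carry_in
    if s > 9:                                # binary result is not a valid decimal digit
        s += 6                               # skip the six unused 4-bit codes
        return s & 0xF, 1                    # corrected digit, decimal carry out
    return s, 0

def nbcd_add(x, y):
    """Add two integers digit-wise in NBCD using ripple decimal carries."""
    xd, yd = to_nbcd(x)[::-1], to_nbcd(y)[::-1]
    out, carry = [], 0
    for i in range(max(len(xd), len(yd))):
        a = xd[i] if i < len(xd) else 0
        b = yd[i] if i < len(yd) else 0
        d, carry = nbcd_add_digit(a, b, carry)
        out.append(d)
    if carry:
        out.append(1)
    return out[::-1]

print(nbcd_add(278, 946))   # -> [1, 2, 2, 4], i.e. 1224
```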

  16. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gani, Rafiqul

    2007-01-01

    Chemical processes are generally modeled through monoscale approaches, which, while not adequate, serve a useful role in product-process design. In this case, the use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework...

  17. AN APPROACH TO EFFICIENT FEM SIMULATIONS ON GRAPHICS PROCESSING UNITS USING CUDA

    Directory of Open Access Journals (Sweden)

    Björn Nutti

    2014-04-01

    Full Text Available The paper presents a highly efficient way of simulating the dynamic behavior of deformable objects by means of the finite element method (FEM with computations performed on Graphics Processing Units (GPU. The presented implementation reduces bottlenecks related to memory accesses by grouping the necessary data per node pairs, in contrast to the classical way done per element. This strategy reduces the memory access patterns that are not suitable for the GPU memory architecture. Furthermore, the presented implementation takes advantage of the underlying sparse-block-matrix structure, and it has been demonstrated how to avoid potential bottlenecks in the algorithm. To achieve plausible deformational behavior for large local rotations, the objects are modeled by means of a simplified co-rotational FEM formulation.
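
    The paper's CUDA kernels are not reproduced in the abstract; as a hedged illustration of the per-node-pair data layout it describes, the NumPy sketch below stores only the nonzero stiffness blocks keyed by (node i, node j) pairs and accumulates a sparse matrix-vector product pair by pair rather than element by element. The toy mesh, block values, and 2-D displacement layout are assumptions.

```python
import numpy as np

def pairwise_matvec(pairs, blocks, u, n_nodes, dof=2):
    """y = K @ u where K is given as (i, j) node pairs with dense dof x dof
    stiffness blocks; each pair is touched exactly once."""
    y = np.zeros(n_nodes * dof)
    for (i, j), Kij in zip(pairs, blocks):
        y[i*dof:(i+1)*dof] += Kij @ u[j*dof:(j+1)*dof]
        if i != j:                                 # symmetric off-diagonal contribution
            y[j*dof:(j+1)*dof] += Kij.T @ u[i*dof:(i+1)*dof]
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_nodes = 4
    pairs = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 3), (3, 3)]  # toy chain mesh
    blocks = [rng.normal(size=(2, 2)) for _ in pairs]
    u = rng.normal(size=n_nodes * 2)
    print(pairwise_matvec(pairs, blocks, u, n_nodes))
```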

  18. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    Science.gov (United States)

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel x 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphic processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
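
    The GPU implementation itself is not listed in the abstract; the NumPy sketch below walks a synthetic B-scan through the same processing chain the authors describe: forward FFT, zero padding to 8192 points, inverse FFT, wavelength-to-wavenumber interpolation, a Hilbert transform across the lateral direction, a final axial FFT, and log scaling. The array sizes, spectrometer band, and mirror depth are placeholder assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import hilbert

N_SPEC, N_LAT, N_PAD = 2048, 64, 8192          # spectral pixels, A-lines, zero-fill size

def process_bscan(spectra, wavelengths):
    """spectra: (N_LAT, N_SPEC) raw interferograms sampled evenly in wavelength."""
    # 1) zero-filling interpolation: FFT, pad to N_PAD, inverse FFT (spectral oversampling)
    f = np.fft.fft(spectra, axis=1)
    f_pad = np.zeros((spectra.shape[0], N_PAD), dtype=complex)
    f_pad[:, :N_SPEC // 2] = f[:, :N_SPEC // 2]
    f_pad[:, -N_SPEC // 2:] = f[:, -N_SPEC // 2:]
    dense = np.fft.ifft(f_pad, axis=1).real * (N_PAD / N_SPEC)
    # 2) resample from evenly spaced wavelength to evenly spaced wavenumber
    lam_dense = np.linspace(wavelengths[0], wavelengths[-1], N_PAD)
    k_dense = 2 * np.pi / lam_dense
    k_even = np.linspace(k_dense.min(), k_dense.max(), N_PAD)
    resampled = np.stack([np.interp(k_even, k_dense[::-1], row[::-1]) for row in dense])
    # 3) lateral Hilbert transform -> complex spectra (full-range reconstruction)
    analytic = hilbert(resampled, axis=0)
    # 4) axial FFT and log scaling
    depth = np.fft.fft(analytic, axis=1)
    return 20 * np.log10(np.abs(depth) + 1e-12)

if __name__ == "__main__":
    lam = np.linspace(800e-9, 880e-9, N_SPEC)                         # assumed source band
    k = 2 * np.pi / lam
    fringes = np.cos(2 * k * 150e-6)[None, :] * np.ones((N_LAT, 1))   # mirror at 150 um
    img = process_bscan(fringes, lam)
    print(img.shape, img.max())
```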

  19. Processing techniques for data from the Kuosheng Unit 1 shakedown safety-relief-valve tests

    International Nuclear Information System (INIS)

    McCauley, E.W.; Rompel, S.L.; Weaver, H.J.; Altenbach, T.J.

    1982-08-01

    This report describes techniques developed at the Lawrence Livermore National Laboratory, Livermore, CA for processing original data from the Taiwan Power Company's Kuosheng MKIII Unit 1 Safety Relief Valve Shakedown Tests conducted in April/May 1981. The computer codes used, TPSORT, TPPLOT, and TPPSD, form a special evaluation system for taking the data from its original packed binary form to ordered, calibrated ASCII transducer files and then to the production of time-history plots, numerical output files, and spectral analyses. Using the data processing techniques described, a convenient means of independently examining and analyzing a unique data base for steam condensation phenomena in the MARKIII wetwell is provided. The techniques developed for handling these data are applicable to the treatment of similar, but perhaps differently structured, experiment data sets

  20. A management system of data for department of diagnostic radiology and patients using the personal computer

    International Nuclear Information System (INIS)

    Kim, Jin Hee; Park, Tae Joon; Choi, Tae Haing; Lim, Se Hwan; Joon Yang Noh; Kim, Sung Jin

    1996-01-01

    With the use of personal computers now widespread, department-level computerization is under way in many departments. We therefore tried to develop a program with a simple user interface, various retrieval functions, and an analytic and statistical processing system to effectively support patient care and the work of a department of diagnostic radiology. The program covers the work of the department of diagnostic radiology and other tasks that require a great deal of bookkeeping. It is designed to run under Windows (Microsoft, America) on a 486DX-2 central processing unit with 8 Mbyte of memory. FoxPro 2.6 for Windows (Microsoft, America) was used as the development tool. The program can be used easily even by staff unfamiliar with computers, and it makes unnecessary many of the record books for the various examinations and operations, which were difficult to maintain. Besides, it keeps data in a unified form, which makes patient care and other departmental work more convenient and helps apply the stored data to scientific research. The above result shows that the work of the department can be effectively managed by analyzing or printing the various examinations and operations done by the department of diagnostic radiology

  1. Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry studies.

    Science.gov (United States)

    Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle

    2009-01-01

    Potential sanitary effects related to electromagnetic fields exposure raise public concerns, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians, for anatomical accuracy and representativeness.

  2. Modern-Day Demographic Processes in Central Europe and Their Potential Interactions with Climate Change

    Science.gov (United States)

    Bański, Jerzy

    2013-01-01

    The aim of this article is to evaluate the effect of contemporary transformations in the population of Central European countries on climate change, in addition to singling out the primary points of interaction between demographic processes and the climate. In analyzing the interactions between climate and demographics, we can formulate three basic hypotheses regarding the region in question: 1) as a result of current demographic trends in Central Europe, the influence of the region on its climate will probably diminish, 2) the importance of the "climatically displaced" in global migratory movements will increase, and some of those concerned will move to Central Europe, 3) the contribution of the region to global food security will increase. In the last decade most of what comprises the region of Central Europe has reported a decline in population growth and a negative migration balance. As a process, this loss of population may have a positive effect on the environment and the climate. We can expect ongoing climate change to intensify migration processes, particularly from countries outside Europe. Interactions between climate and demographic processes can also be viewed in the context of food security. The global warming most sources foresee for the coming decades is the process most likely to result in spatial polarization of food production in agriculture. Central Europe will then face the challenge of assuring and improving food security, albeit this time on a global scale.

  3. Research on application of computer technologies in jewelry process

    Directory of Open Access Journals (Sweden)

    Junbo Xia

    2017-06-01

    Full Text Available Jewelry production is a process that uses precious raw materials and must keep processing losses low. The traditional manual mode is unable to meet the real needs of enterprises, while the involvement of computer technology can solve this practical problem. At present, the main problem restricting the application of computers in jewelry production is the failure to find a production model that can serve the whole industry chain with the computer as the core of production. This paper designs a “synchronous and diversified” production model with “computer aided design technology” and “rapid prototyping technology” at its core, tests it with actual production cases, and achieves certain results, which are forward-looking and advanced.

  4. Central auditory processing outcome after stroke in children

    Directory of Open Access Journals (Sweden)

    Karla M. I. Freiria Elias

    2014-09-01

    Full Text Available Objective To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Results Similar performance between groups was verified for the auditory closure ability, with pronounced deficits in the selective attention and temporal processing abilities. Most children with stroke showed a moderately impaired auditory ability. Conclusion Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.

  5. Automatic processing of radioimmunological research data on a computer

    International Nuclear Information System (INIS)

    Korolyuk, I.P.; Gorodenko, A.N.; Gorodenko, S.I.

    1979-01-01

    A program, 'CRITEST', written in PL/1 for the EC computer and intended for automatic processing of the results of radioimmunological research has been developed. The program runs under the operating system of the EC computer and occupies a 60 kbyte section. Aitken's modified algorithm was used in constructing the program. The program was clinically validated in the determination of a number of hormones: CTH, T4, T3, TSH. The automatic processing of radioimmunological research data on the computer makes it possible to simplify the labour-consuming analysis and to raise its accuracy
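
    Assuming the transliterated 'Eitken' refers to Aitken, the sketch below shows Aitken's iterated-interpolation scheme of the kind typically used to read unknown concentrations off a radioimmunoassay standard curve. The standard points and the query value are invented for illustration and are not taken from the paper.

```python
def aitken(xs, ys, x):
    """Aitken's iterated linear interpolation: successively combine neighbouring
    estimates until a single interpolating-polynomial value at x remains."""
    p = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            p[i] = ((x - xs[i + level]) * p[i] - (x - xs[i]) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]

# Hypothetical standard curve: bound-fraction counts versus hormone concentration
counts = [0.92, 0.74, 0.55, 0.38, 0.21]
conc   = [0.5, 1.0, 2.0, 4.0, 8.0]
# Inverse interpolation: estimate the concentration for a measured count of 0.60
print(aitken(counts, conc, 0.60))
```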

  6. Splash, pop, sizzle: Information processing with phononic computing

    Directory of Open Access Journals (Sweden)

    Sophia R. Sklan

    2015-05-01

    Full Text Available Phonons, the quanta of mechanical vibration, are important to the transport of heat and sound in solid materials. Recent advances in the fundamental control of phonons (phononics have brought into prominence the potential role of phonons in information processing. In this review, the many directions of realizing phononic computing and information processing are examined. Given the relative similarity of vibrational transport at different length scales, the related fields of acoustic, phononic, and thermal information processing are all included, as are quantum and classical computer implementations. Connections are made between the fundamental questions in phonon transport and phononic control and the device level approach to diodes, transistors, memory, and logic.

  7. Teaching Psychology Students Computer Applications.

    Science.gov (United States)

    Atnip, Gilbert W.

    This paper describes an undergraduate-level course designed to teach the applications of computers that are most relevant in the social sciences, especially psychology. After an introduction to the basic concepts and terminology of computing, separate units were devoted to word processing, data analysis, data acquisition, artificial intelligence,…

  8. Use of Six Sigma strategies to pull the line on central line-associated bloodstream infections in a neurotrauma intensive care unit.

    Science.gov (United States)

    Loftus, Kelli; Tilley, Terry; Hoffman, Jason; Bradburn, Eric; Harvey, Ellen

    2015-01-01

    The creation of a consistent culture of safety and quality in an intensive care unit is challenging. We applied the Six Sigma Define-Measure-Analyze-Improve-Control (DMAIC) model for quality improvement (QI) to develop a long-term solution to improve outcomes in a high-risk neurotrauma intensive care unit. We sought to reduce central line utilization as a cornerstone in preventing central line-associated bloodstream infections (CLABSIs). This study describes the successful application of the DMAIC model in the creation and implementation of evidence-based quality improvement designed to reduce CLABSIs to below national benchmarks.

  9. 32 CFR 516.12 - Service of civil process outside the United States.

    Science.gov (United States)

    2010-07-01

    32 CFR 516.12, National Defense: AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS; LITIGATION; Service of Process. § 516.12 Service of civil process outside the United States. (a) Process of foreign courts. In foreign countries, service of process...

  10. Ecosystem process interactions between central Chilean habitats

    Directory of Open Access Journals (Sweden)

    Meredith Root-Bernstein

    2015-01-01

    Full Text Available Understanding ecosystem processes is vital for developing dynamic adaptive management of human-dominated landscapes. We focus on conservation and management of the central Chilean silvopastoral savanna habitat called “espinal”, which often occurs near matorral, a shrub habitat. Although matorral, espinal and native sclerophyllous forest are linked successionally, they are not jointly managed and conserved. Management goals in “espinal” include increasing woody cover, particularly of the dominant tree Acacia caven, improving herbaceous forage quality, and increasing soil fertility. We asked whether adjacent matorral areas contribute to espinal ecosystem processes related to the three main espinal management goals. We examined input and outcome ecosystem processes related to these goals in matorral and espinal with and without shrub understory. We found that matorral had the largest sets of inputs to ecosystem processes, and espinal with shrub understory had the largest sets of outcomes. Moreover, we found that these outcomes were broadly in the directions preferred by management goals. This supports our prediction that matorral acts as an ecosystem process bank for espinal. We recommend that management plans for landscape resilience consider espinal and matorral as a single landscape cover class that should be maintained as a dynamic mosaic. Joint management of espinal and matorral could create new management and policy opportunities.

  11. Retrofitting of NPP Computer systems

    International Nuclear Information System (INIS)

    Pettersen, G.

    1994-01-01

    Retrofitting of nuclear power plant control rooms is a continuing process for most utilities. This involves introducing and/or extending computer-based solutions for surveillance and control as well as improving the human-computer interface. The paper describes typical requirements when retrofitting NPP process computer systems, and focuses on the activities of Institute for energieteknikk, OECD Halden Reactor project with respect to such retrofitting, using examples from actual delivery projects. In particular, a project carried out for Forsmarksverket in Sweden comprising upgrade of the operator system in the control rooms of units 1 and 2 is described. As many of the problems of retrofitting NPP process computer systems are similar to such work in other kinds of process industries, an example from a non-nuclear application area is also given

  12. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities Project

    Energy Technology Data Exchange (ETDEWEB)

    Kevin J. Coppersmith; Lawrence A. Salomone; Chris W. Fuller; Laura L. Glaser; Kathryn L. Hanson; Ross D. Hartleb; William R. Lettis; Scott C. Lindvall; Stephen M. McDuffie; Robin K. McGuire; Gerry L. Stirewalt; Gabriel R. Toro; Robert R. Youngs; David L. Slayter; Serkan B. Bozkurt; Randolph J. Cumbest; Valentina Montaldo Falero; Roseanne C. Perman; Allison M. Shumway; Frank H. Syms; Martitia (Tish) P. Tuttle

    2012-01-31

This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986) and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings: The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives: The regional CEUS SSC model will be of value to readers who are involved in PSHA work, and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  13. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    Science.gov (United States)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
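    The particle-sorting idea highlighted above can be illustrated outside CUDA as well. Below is a minimal NumPy sketch (CPU-only; the array names are invented for the example, not taken from the thesis code) of sorting particles by cell index before a nearest-grid-point charge deposition, which is the access pattern the GPU version exploits for coalesced memory reads:

```python
import numpy as np

def deposit_charge_sorted(x, q, n_cells, dx):
    """Toy 1-D charge deposition with nearest-grid-point weighting.

    Sorting particles by cell index groups the memory accesses of each cell
    together, mirroring the sort-before-interpolation step used on the GPU.
    """
    cell = np.clip((x / dx).astype(np.int64), 0, n_cells - 1)  # cell index per particle
    order = np.argsort(cell, kind="stable")                    # sort particles by cell
    cell_sorted, q_sorted = cell[order], q[order]
    # Accumulate charge per cell; bincount sums the weights of equal indices.
    rho = np.bincount(cell_sorted, weights=q_sorted, minlength=n_cells) / dx
    return rho

# Usage: 100,000 random particles on a 64-cell grid
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100_000)
q = np.full_like(x, 1.0e-3)
rho = deposit_charge_sorted(x, q, n_cells=64, dx=1.0 / 64)
```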

  14. Analysis spectral shapes from California and central United States ground motion

    International Nuclear Information System (INIS)

    1994-01-01

    The objective of this study is to analyze the spectral shapes from earthquake records with magnitudes and distances comparable to those that dominate seismic hazard at Oak Ridge, in order to provide guidance for the selection of site-specific design-spectrum shapes for use in Oak Ridge. The authors rely heavily on California records because the number of relevant records from the central and eastern United States (CEUS) is not large enough for drawing statistically significant conclusions. They focus on the 0.5 to 10-Hz frequency range for two reasons: (1) this is the frequency range of most engineering interest, and (2) they avoid the effect of well-known differences in the high-frequency energy content between California and CEUS ground motions

  15. Digital computer control of a research nuclear reactor

    International Nuclear Information System (INIS)

    Crawford, Kevan

    1986-01-01

    Currently, the use of digital computers in energy producing systems has been limited to data acquisition functions. These computers have greatly reduced human involvement in the moment to moment decision process and the crisis decision process, thereby improving the safety of the dynamic energy producing systems. However, in addition to data acquisition, control of energy producing systems also includes data comparison, decision making, and control actions. The majority of the latter functions are accomplished through the use of analog computers in a distributed configuration. The lack of cooperation, and hence inefficiency, in distributed control, and the extent of human interaction in critical phases of control, have provided the incentive to improve the latter three functions of energy systems control. Properly applied, centralized control by digital computers can increase efficiency by making the system react as a single unit and by implementing efficient power changes to match demand. Additionally, safety will be improved by further limiting human involvement to action only in the case of a failure of the centralized control system. This paper presents a hardware and software design for the centralized control of a research nuclear reactor by a digital computer. Current nuclear reactor control philosophies, which include redundancy, inherent safety in failure, and conservative yet operational scram initiation, were used as the bases of the design. The control philosophies were applied to the power monitoring system, the fuel temperature monitoring system, the area radiation monitoring system, and the overall system interaction. Unlike the single function analog computers that are currently used to control research and commercial reactors, this system will be driven by a multifunction digital computer. Specifically, the system will perform control rod movements to conform with operator requests, automatically log the required physical parameters during reactor
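    The control philosophy summarized here (redundant monitoring channels, inherent safety on failure, conservative scram initiation) can be sketched in a few lines. The following Python fragment is a hypothetical illustration only, not the thesis design; every channel name and limit is invented for the example:

```python
# Hypothetical, highly simplified scram logic: any channel out of range, or a
# failed reading, trips the reactor (fail-safe behaviour on error).
SCRAM_LIMITS = {                 # invented limits, for illustration only
    "power_kw": 250.0,           # licensed power limit
    "fuel_temp_c": 550.0,        # fuel temperature limit
    "area_radiation_mr_h": 10.0, # area radiation monitor limit
}

def scram_required(readings: dict) -> bool:
    """Return True if any monitored channel demands a scram."""
    for channel, limit in SCRAM_LIMITS.items():
        value = readings.get(channel)
        if value is None or value >= limit:  # missing channel -> fail safe
            return True
    return False

def control_step(readings: dict, rod_request_mm: float) -> str:
    """One pass of the centralized loop: scram check first, then rod motion."""
    if scram_required(readings):
        return "SCRAM"                       # drop rods and log the event
    return f"MOVE_RODS {rod_request_mm:+.1f} mm"

print(control_step({"power_kw": 180.0, "fuel_temp_c": 410.0,
                    "area_radiation_mr_h": 2.0}, rod_request_mm=-1.5))
```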

  16. Computer aided design of fast neutron therapy units

    International Nuclear Information System (INIS)

    Gileadi, A.E.; Gomberg, H.J.; Lampe, I.

    1980-01-01

    Conceptual design of a radiation-therapy unit using fusion neutrons is presently being considered by KMS Fusion, Inc. As part of this effort, a powerful and versatile computer code, TBEAM, has been developed which enables the user to determine physical characteristics of the fast neutron beam generated in the facility under consideration, using certain given design parameters of the facility as inputs. TBEAM uses the method of statistical sampling (Monte Carlo) to solve the space, time and energy dependent neutron transport equation relating to the conceptual design described by the user-supplied input parameters. The code traces the individual source neutrons as they propagate throughout the shield-collimator structure of the unit, and it keeps track of each interaction by type, position and energy. In its present version, TBEAM is applicable to homogeneous and laminated shields of spherical geometry, to collimator apertures of conical shape, and to neutrons emitted by point sources or such plate sources as are used in neutron generators of various types. TBEAM-generated results comparing the performance of point or plate sources in otherwise identical shield-collimator configurations are presented in numerical form. (H.K.)
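    TBEAM itself is not reproduced here, but the statistical-sampling principle it relies on can be shown with a toy example. The sketch below assumes deliberately simplified physics (mono-energetic neutrons from a central point source, a single homogeneous absorbing spherical shield, no scattering) and merely estimates the fraction of source neutrons that escape the shield by sampling exponential free paths:

```python
import numpy as np

def toy_shield_transmission(n_neutrons, shield_radius_cm, sigma_total_cm1, rng=None):
    """Toy Monte Carlo: fraction of neutrons from a central point source that
    cross a homogeneous absorbing shell without interacting.

    A production code such as TBEAM tracks scattering, energy dependence,
    laminated geometry and collimator apertures; this only illustrates the
    statistical sampling of the transport process.
    """
    rng = rng or np.random.default_rng()
    # Distance to the first interaction follows an exponential distribution.
    free_path = rng.exponential(scale=1.0 / sigma_total_cm1, size=n_neutrons)
    return np.mean(free_path > shield_radius_cm)

# Sanity check against the analytic answer exp(-sigma * R)
est = toy_shield_transmission(1_000_000, shield_radius_cm=20.0, sigma_total_cm1=0.1)
print(est, np.exp(-0.1 * 20.0))
```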

  17. Digital control computer upgrade at the Cernavoda NPP simulator

    International Nuclear Information System (INIS)

    Ionescu, T.

    2006-01-01

    The Plant Process Computer equips some nuclear power plants, such as the CANDU-600, with Centralized Control performed by an assembly of two computers known as Digital Control Computers (DCC), working in parallel to drive the plant safely at steady state and during normal maneuvers, but also during abnormal transients when the plant is automatically steered to a safe state. Centralized Control comprises both hardware and software that must be present in the frame of the Full Scope Simulator; its configuration is subject to change, with specific requirements, during the plant and simulator life, and it is covered by this subsection

  18. Evaluation of Pb and Cu contents of selected component parts of ...

    African Journals Online (AJOL)

    Thirty-five (35) waste computer central processing units (CPUs) and 24 waste computer monitors of different brands, manufacturers, years of manufacture, and models were collected from different electronic repairers' shops in Ibadan, South-western Nigeria and investigated for their lead and copper contents.

  19. Central tarsal bone fractures in horses not used for racing: Computed tomographic configuration and long-term outcome of lag screw fixation

    OpenAIRE

    Gunst, S; Del Chicca, Francesca; Fürst, Anton; Kuemmerle, Jan M

    2016-01-01

    REASONS FOR PERFORMING STUDY: There are no reports on the configuration of equine central tarsal bone fractures based on cross-sectional imaging and clinical and radiographic long-term outcome after internal fixation. OBJECTIVES: To report clinical, radiographic and computed tomographic findings of equine central tarsal bone fractures and to evaluate the long-term outcome of internal fixation. STUDY DESIGN: Retrospective case series. METHODS: All horses diagnosed with a central tarsa...

  20. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas-cleaning equipment, and modeling of biogas formation processes.

  1. Investigation of The regularities of the process and development of method of management of technological line operation within the process of mass raw mate-rials supply in terms of dynamics of inbound traffic of unit trains

    Directory of Open Access Journals (Sweden)

    Катерина Ігорівна Сізова

    2015-03-01

    Full Text Available Large-scale sinter plants at metallurgical enterprises incorporate highly productive transport-and-handling complexes (THC) that receive and process mass iron-bearing raw materials. Such THCs as a rule include unloading facilities and a freight railway station. The central part of the THC is a technological line that carries out the reception and unloading of unit trains with raw materials. The technological line consists of transport and freight modules. The latter plays the leading role and, in its turn, consists of rotary car dumpers and conveyor belts. This module represents a deterministic system that carries out preparation and unloading operations. Its processing capacity is set in accordance with the manufacturing capacity of the sinter plant. The research has shown that in existing operating conditions, which are characterized by “arrhythmia” in the interaction between external transport operation and production, the technological line of the THC functions inefficiently: it secures just 18-20 % of instances of processing of inbound unit trains within the set standard time. It was determined that the duration of the cycle of processing an inbound unit train can play the role of a regulator, given the stochastic character of the intervals between inbound unit trains with raw materials on the one hand, and the deterministic unloading system on the other. That is why evaluation of the interdependence between these factors allows determination of the duration of the cycle of processing of inbound unit trains. Based on the results of the study, a method of logistical management of the processing of inbound unit trains was offered, in which the real duration of processing of an inbound unit train is taken as the regulated value. The regulation process implies regular evaluation and comparison of these values and, taking into account different disturbances, decision-making concerning adaptation of the functioning of the technological line. According to the offered principles

  2. Organization of the M-6000 computer calculating process in the CAMAC on-line measurement systems for a physical experiment

    International Nuclear Information System (INIS)

    Bespalova, T.V.; Volkov, A.S.; Golutvin, I.A.; Maslov, V.V.; Nevskaya, N.A.; Okonishnikov, A.A.; Terekhov, V.E.; Shilkin, I.P.

    1977-01-01

    Discussed are the basic results of the work on designing the software of the computer measuring complex (CMC), which uses the M-6000 computer and operates on-line with an accelerator. All the CMC units comply with the CAMAC standard. The CMC incorporates a main memory of twenty-four kilobytes of 16-bit words and external memory on magnetic disks, 1 megabyte in size. A modification of the technique for designing the CMC software is suggested, providing for program complexes that can be dynamically adjusted by an experimentalist for a given experiment in a short time. The CMC software comprises the following major portions: a software generator, a data acquisition program, on-line data processing routines, off-line data processing programs, and programs for data recording on magnetic tapes and disks. Testing of the designed CMC has revealed that the total data processing time ranges from 150 to 500 ms

  3. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  4. AGRIS: Description of computer programs

    International Nuclear Information System (INIS)

    Schmid, H.; Schallaboeck, G.

    1976-01-01

    The set of computer programs used at the AGRIS (Agricultural Information System) Input Unit at the IAEA, Vienna, Austria to process the AGRIS computer-readable data is described. The processing flow is illustrated. The configuration of the IAEA's computer, a list of error messages generated by the computer, the EBCDIC code table extended for AGRIS and INIS, the AGRIS-6 bit code, the work sheet format, and job control listings are included as appendixes. The programs are written for an IBM 370, model 145, operating system OS or VS, and require a 130K partition. The programming languages are PL/1 (F-compiler) and Assembler

  5. 2011 floods of the central United States

    Science.gov (United States)

    ,

    2013-01-01

    The Central United States experienced record-setting flooding during 2011, with floods that extended from headwater streams in the Rocky Mountains, to transboundary rivers in the upper Midwest and Northern Plains, to the deep and wide sand-bedded lower Mississippi River. The U.S. Geological Survey (USGS), as part of its mission, collected extensive information during and in the aftermath of the 2011 floods to support scientific analysis of the origins and consequences of extreme floods. The information collected for the 2011 floods, combined with decades of past data, enables scientists and engineers from the USGS to provide syntheses and scientific analyses to inform emergency managers, planners, and policy makers about life-safety, economic, and environmental-health issues surrounding flood hazards for the 2011 floods and future floods like it. USGS data, information, and scientific analyses provide context and understanding of the effect of floods on complex societal issues such as ecosystem and human health, flood-plain management, climate-change adaptation, economic security, and the associated policies enacted for mitigation. Among the largest societal questions is "How do we balance agricultural, economic, life-safety, and environmental needs in and along our rivers?" To address this issue, many scientific questions have to be answered including the following: * How do the 2011 weather and flood conditions compare to the past weather and flood conditions and what can we reasonably expect in the future for flood magnitudes?

  6. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
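    For reference, the core SPH operation being accelerated in these benchmarks is a per-particle summation over neighbours. A minimal serial NumPy sketch of the density summation with a standard cubic-spline kernel is given below; the variable names are illustrative and not taken from the benchmarked code, and the outer loop is the part that the OpenMP, MIC and CUDA implementations parallelize (normally with a neighbour list rather than the O(N^2) search used here):

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 3-D cubic-spline SPH kernel (support radius 2h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)                # 3-D normalisation constant
    w = np.zeros_like(q)
    inner = q < 1.0
    outer = (q >= 1.0) & (q < 2.0)
    w[inner] = 1.0 - 1.5 * q[inner]**2 + 0.75 * q[inner]**3
    w[outer] = 0.25 * (2.0 - q[outer])**3
    return sigma * w

def sph_density(positions, masses, h):
    """Naive O(N^2) density summation: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    n = len(masses)
    rho = np.empty(n)
    for i in range(n):                          # embarrassingly parallel loop
        r = np.linalg.norm(positions - positions[i], axis=1)
        rho[i] = np.sum(masses * cubic_spline_w(r, h))
    return rho

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(500, 3))
rho = sph_density(pos, masses=np.full(500, 1.0 / 500), h=0.1)
```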

  7. Centralized digital control of accelerators

    International Nuclear Information System (INIS)

    Melen, R.E.

    1983-09-01

    In contrasting the title of this paper with a second paper to be presented at this conference entitled Distributed Digital Control of Accelerators, a potential reader might be led to believe that this paper will focus on systems whose computing intelligence is centered in one or more computers in a centralized location. Instead, this paper will describe the architectural evolution of SLAC's computer based accelerator control systems with respect to the distribution of their intelligence. However, the use of the word centralized in the title is appropriate because these systems are based on the use of centralized large and computationally powerful processors that are typically supported by networks of smaller distributed processors

  8. Directory of computer users in nuclear medicine

    Energy Technology Data Exchange (ETDEWEB)

    Erickson, J.J.; Gurney, J.; McClain, W.J. (eds.)

    1979-09-01

    The Directory of Computer Users in Nuclear Medicine consists primarily of detailed descriptions and indexes to these descriptions. A typical Installation Description contains the name, address, type, and size of the institution and the names of persons within the institution who can be contacted for further information. If the department has access to a central computer facility for data analysis or timesharing, the type of equipment available and the method of access to that central computer is included. The dedicated data processing equipment used by the department in its nuclear medicine studies is described, including the peripherals, languages used, modes of data collection, and other pertinent information. Following the hardware descriptions are listed the types of studies for which the data processing equipment is used, including the language(s) used, the method of output, and an estimate of the frequency of the particular study. An Installation Index and an Organ Studies Index are also included. (PCS)

  9. Directory of computer users in nuclear medicine

    International Nuclear Information System (INIS)

    Erickson, J.J.; Gurney, J.; McClain, W.J.

    1979-09-01

    The Directory of Computer Users in Nuclear Medicine consists primarily of detailed descriptions and indexes to these descriptions. A typical Installation Description contains the name, address, type, and size of the institution and the names of persons within the institution who can be contacted for further information. If the department has access to a central computer facility for data analysis or timesharing, the type of equipment available and the method of access to that central computer is included. The dedicated data processing equipment used by the department in its nuclear medicine studies is described, including the peripherals, languages used, modes of data collection, and other pertinent information. Following the hardware descriptions are listed the types of studies for which the data processing equipment is used, including the language(s) used, the method of output, and an estimate of the frequency of the particular study. An Installation Index and an Organ Studies Index are also included

  10. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  11. Making Friends in Dark Shadows: An Examination of the Use of Social Computing Strategy Within the United States Intelligence Community Since 9/11

    Directory of Open Access Journals (Sweden)

    Andrew Chomik

    2011-01-01

    Full Text Available The tragic events of 9/11/2001 in the United States highlighted failures in communication and cooperation in the U.S. intelligence community. Agencies within the community failed to “connect the dots” by not collaborating in intelligence gathering efforts, which resulted in severe gaps in data sharing that eventually contributed to the terrorist attack on American soil. Since then, and under the recommendation made by the 9/11 Commission Report, the United States intelligence community has made organizational and operational changes to intelligence gathering and sharing, primarily with the creation of the Office of the Director of National Intelligence (ODNI). The ODNI has since introduced a series of web-based social computing tools to be used by all members of the intelligence community, primarily with its closed-access wiki entitled “Intellipedia” and its social networking service called “A-Space”. This paper argues that, while these and other social computing tools have been adopted successfully into the intelligence workplace, they have reached a plateau in their use and serve only as complementary tools to otherwise pre-existing information sharing processes. Agencies continue to ‘stove-pipe’ their respective data, a chronic challenge that plagues the community due to bureaucratic policy, technology use and workplace culture. This paper identifies and analyzes these challenges, and recommends improvements in the use of these tools, both in the business processes behind them and the technology itself. These recommendations aim to provide possible solutions for using these social computing tools as part of a more trusted, collaborative information sharing process.

  12. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    Science.gov (United States)

    Menekse, Muhsin

    2015-01-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  13. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    Science.gov (United States)

    Pendola, Martin; Saha, Subrata

    2015-01-01

    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  14. 32 CFR 516.10 - Service of civil process within the United States.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Service of civil process within the United States... CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.10 Service of civil process within the United States. (a) Policy. DA officials will not prevent or evade the service or process in...

  15. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Science.gov (United States)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
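    As a small CPU-side illustration of the building block described above (Cholesky decomposition of the finite-element SLAE), the SciPy calls below factor and solve a symmetric positive-definite system; the matrix is a dense stand-in rather than the authors' sparse FEM matrix, and the GPU analogue in the paper is provided by NVIDIA's CUDA libraries:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Stand-in SPD system A x = b (a real FEM matrix would be large and sparse).
rng = np.random.default_rng(0)
m = rng.standard_normal((200, 200))
A = m @ m.T + 200.0 * np.eye(200)   # guarantees positive definiteness
b = rng.standard_normal(200)

c, low = cho_factor(A)              # Cholesky factorisation (LAPACK on the CPU)
x = cho_solve((c, low), b)          # two triangular solves reuse the factor
print(np.allclose(A @ x, b))        # True
```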

  16. A multi-criteria assessment of scenarios on thermal processing of infectious hospital wastes: A case study for Central Macedonia

    International Nuclear Information System (INIS)

    Karagiannidis, A.; Papageorgiou, A.; Perkoulidis, G.; Sanida, G.; Samaras, P.

    2010-01-01

    In Greece more than 14,000 tonnes of infectious hospital waste are produced yearly; a significant part of it is still mismanaged. Only one off-site licensed incineration facility for hospital wastes is in operation, with the remainder of the market covered by various hydroclave and autoclave units, whereas numerous problems are still generally encountered regarding waste segregation, collection, transportation and management, as well as often excessive entailed costs. Everyday practices still include dumping the majority of solid hospital waste into household disposal sites and landfills after sterilization, largely without any preceding recycling and separation steps. Discussed in the present paper are the implemented and future treatment practices of infectious hospital wastes in Central Macedonia; produced quantities are reviewed, actual treatment costs are addressed critically, and the overall situation in Greece is discussed. Moreover, thermal treatment processes that could be applied for the treatment of infectious hospital wastes in the region are assessed via the multi-criteria decision method Analytic Hierarchy Process. Furthermore, a sensitivity analysis was performed, and the analysis demonstrated that a centralized autoclave or hydroclave plant near Thessaloniki is the best performing option, depending however on the selection and weighting of criteria in the multi-criteria process. Moreover, the study found that a common treatment option for all infectious hospital wastes produced in the Region of Central Macedonia could offer cost and environmental benefits. In general, the multi-criteria decision method, as well as the conclusions and remarks of this study, can be used as a basis for future planning and anticipation of the needs for investments in the area of medical waste management.
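    For readers unfamiliar with the Analytic Hierarchy Process used in this assessment, its core computation is the derivation of criterion weights from a pairwise-comparison matrix via the principal eigenvector, followed by a consistency check. The sketch below uses invented comparison values, not those of the study:

```python
import numpy as np

# Hypothetical pairwise comparisons of three criteria (e.g. cost, environment,
# health) on Saaty's 1-9 scale; A[i, j] = importance of criterion i over j.
A = np.array([[1.0,  3.0, 0.5 ],
              [1/3,  1.0, 0.25],
              [2.0,  4.0, 1.0 ]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue lambda_max
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalised priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)         # consistency index
cr = ci / 0.58                               # Saaty's random index for n = 3
print(weights, cr)                           # CR < 0.1 is conventionally acceptable
```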

  17. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
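    The speckle-variance computation at the heart of this system is, per pixel, simply the interframe variance over N registered structural frames; the contribution of the paper is running it, together with OCT reconstruction and subpixel registration, on the GPU in real time. A minimal CPU-side NumPy version (with illustrative array shapes) looks like this:

```python
import numpy as np

def speckle_variance(frames):
    """Interframe speckle variance.

    frames: array of shape (N, rows, cols) with N registered structural OCT
    intensity frames from the same location. Moving scatterers (blood flow)
    give high variance; static tissue gives low variance.
    """
    return np.var(frames, axis=0)

# Example: 4 frames of 512 x 512 pixels, as in the reported SV image sets
rng = np.random.default_rng(0)
frames = rng.random((4, 512, 512)).astype(np.float32)
sv = speckle_variance(frames)
```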

  18. Modernization of the process computer at the Onagawa-1 NPP

    International Nuclear Information System (INIS)

    Matsuda, Ya.

    1997-01-01

    Modernization of a process computer, necessitated by the need to increase storage capacity following the introduction of a new type of fuel and by the replacement of worn-out computer components, is described. A comparison of the process computer parameters before and after modernization is given

  19. A primary study on the increasing of efficiency in the computer cooling system by means of external air

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. H.; Kim, M. H. [Silla University, Busan (Korea, Republic of)

    2009-07-01

    In recent years, with the continuing increase in personal computer capability, such as higher performance and high-quality, high-resolution images, the computer system's components produce large amounts of heat during operation. This study analyzes and investigates the ability and efficiency of the cooling system inside the computer by means of the Central Processing Unit (CPU) and power supply cooling fans. The research was conducted to increase the ability of the cooling system inside the computer by making a structure that produces different air pressures in an air inflow tube. Consequently, when the temperatures of the CPU and of the space inside the computer were compared with those of a general personal computer, the temperatures of the tested CPU, the interior space and the heat sink were lower by as much as 5 °C, 2.5 °C and 7 °C, respectively. In addition, the fan speed (revolutions per minute, RPM) was lower by as much as 250 after 1 hour of operation. This research explored the possibility of enhancing the effective cooling of high-performance computer systems.

  20. Technological innovation: a structrational process view

    NARCIS (Netherlands)

    Fehse, K.I.A.; Wognum, P.M.

    1999-01-01

    The central aim of our research is to describe and explain how the introduction of a computer-based technology, which supports co-operative work in engineering departments, induces change processes. The employment of computer-based technologies in product development organisations to support

  1. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    Directory of Open Access Journals (Sweden)

    Sungki Kim

    2015-08-01

    Full Text Available Pyroprocessing, which is a dry recycling method, converts spent nuclear fuel into U (Uranium)/TRU (TRansUranium) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method. Toward this end, the pyroprocess was classified into four kinds of unit processes: pretreatment, electrochemical reduction, electrorefining and electrowinning. The unit process cost was calculated by classifying the cost consumed at each process into raw material and conversion costs. The unit process costs of the pretreatment, electrochemical reduction, electrorefining and electrowinning were calculated as 195 US$/kgU-TRU, 310 US$/kgU-TRU, 215 US$/kgU-TRU and 231 US$/kgU-TRU, respectively. Finally the total pyroprocess cost was calculated as 951 US$/kgU-TRU. In addition, the cost driver for the raw material cost was identified as the cost for Li3PO4, needed for the LiCl-KCl purification process, and platinum as an anode electrode in the electrochemical reduction process.
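    The process-costing arithmetic reported above can be checked directly: the total pyroprocess cost is the sum of the four unit-process costs.

```python
# Unit process costs from the study, in US$/kgU-TRU
unit_costs = {
    "pretreatment": 195,
    "electrochemical_reduction": 310,
    "electrorefining": 215,
    "electrowinning": 231,
}

total = sum(unit_costs.values())
print(total)  # 951 US$/kgU-TRU, matching the reported total
```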

  2. Central venous catheter placement by an interventional radiology unit: an Australian experience

    International Nuclear Information System (INIS)

    Lee, M. K. S.; Mossop, P. J.; Vrazas, J. I.

    2007-01-01

    The aim of this retrospective study was to analyse the outcomes of central venous catheter (CVC) placement carried out by an interventional radiology unit. A review of our hospital records identified 331 consecutive patients who underwent insertion of a tunnelled or non-tunnelled CVC between January 2000 and December 2004. Key outcome measures included the technical success rate of CVC insertion and the percentage of immediate and late (>30 days) complications. A total of 462 CVCs were placed under radiological guidance, with an overall success rate of 98.9%. Immediate complications included one pneumothorax, which was diagnosed 7 days after subclavian CVC insertion, and eight episodes of significant haematoma or bleeding within 24 h of CVC insertion. No cases were complicated by arterial puncture or air embolus. Catheter-related sepsis occurred in 2% of non-tunnelled CVC and 8.9% of tunnelled CVC. The overall incidence of catheter-related sepsis was 0.17 per 100 catheter days. As the demand for chemotherapy and haemodialysis grows with our ageing population, interventional radiology suites are well placed to provide a safe and reliable service for the placement of central venous access devices

  3. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have in recent years been demonstrated to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to develop applications in both academia and industry, and it has the potential to implement a huge blueprint in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then set up comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  4. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  5. Use of personal computer image for processing a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on a 256 x 256 matrix and a 512 x 512 matrix. The software language used for the image processing was Macro-Assembler running under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on a flexible diskette. The image processes, including display of the image on the monitor, contrast enhancement, unsharp-mask contrast enhancement, various filter processes, edge detection, and the color histogram, were obtained in 1.6 sec to 67 sec, indicating that a commercial personal computer has sufficient capability for routine clinical MRI processing. (author)
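    One of the operations listed above, unsharp-mask contrast enhancement, is straightforward to reproduce with today's tools. The sketch below uses modern NumPy/SciPy rather than the original Macro-Assembler code: the image is blurred and a weighted difference is added back.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Classic unsharp masking: sharpened = image + amount * (image - blurred)."""
    blurred = gaussian_filter(image.astype(np.float64), sigma=sigma)
    return image + amount * (image - blurred)

# Example on a synthetic 256 x 256 slice containing a bright square
img = np.zeros((256, 256))
img[96:160, 96:160] = 100.0
sharp = unsharp_mask(img, sigma=3.0, amount=0.7)
```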

  6. Design, functioning and possible applications of process computers

    International Nuclear Information System (INIS)

    Kussl, V.

    1975-01-01

    Process computers are useful as automation instruments a) when large numbers of data are processed in analog or digital form, b) for low data flow (data rate), and c) when data must be stored over short or long periods of time. (orig./AK) [de

  7. CIPSS [computer-integrated process and safeguards system]: The integration of computer-integrated manufacturing and robotics with safeguards, security, and process operations

    International Nuclear Information System (INIS)

    Leonard, R.S.; Evans, J.C.

    1987-01-01

    This poster session describes the computer-integrated process and safeguards system (CIPSS). The CIPSS combines systems developed for factory automation and automated mechanical functions (robots) with varying degrees of intelligence (expert systems) to create an integrated system that would satisfy current and emerging security and safeguards requirements. Specifically, CIPSS is an extension of the automated physical security functions concepts. The CIPSS also incorporates the concepts of computer-integrated manufacturing (CIM) with integrated safeguards concepts, and draws upon the Defense Advanced Research Projects Agency's (DARPA's) strategic computing program

  8. Modeling Prices for Sawtimber Stumpage in the South-Central United States

    Directory of Open Access Journals (Sweden)

    Rajan Parajuli

    2016-07-01

    Full Text Available The South-Central United States, which includes the states of Louisiana, Mississippi, Texas, and Arkansas, represents an important segment of the softwood sawtimber market. By using the Seemingly Unrelated Regression (SUR) method to account for the linkage among the four contiguous timber markets, this study examines the dynamics of softwood sawtimber stumpage markets within the region. Based on quarterly data from 1981 to 2014, the findings reveal that both pulpwood and chip-and-saw (CNS) prices have a positive influence on the Texas and Arkansas sawtimber markets. Moreover, Granger-causality tests suggest that unidirectional causality runs from pulpwood and CNS markets to the respective sawtimber market. Compared to the pre-financial crisis period, sawtimber prices in these four states are 9%–17% lower in recent years.

  9. Misdiagnosis of acute peripheral vestibulopathy in central nervous ischemic infarction.

    Science.gov (United States)

    Braun, Eva Maria; Tomazic, Peter Valentin; Ropposch, Thorsten; Nemetz, Ulrike; Lackner, Andreas; Walch, Christian

    2011-12-01

    Vertigo is a very common symptom at otorhinolaryngology (ENT), neurological, and emergency units, but often, it is difficult to distinguish between vertigo of peripheral and central origin. We conducted a retrospective analysis of a hospital database, including all patients admitted to the ENT University Hospital Graz after neurological examination, with a diagnosis of peripheral vestibular vertigo and subsequent diagnosis of central nervous infarction as the actual cause for the vertigo. Twelve patients were included in this study. All patients with acute spinning vertigo after a thorough neurological examination and with uneventful computed tomographic scans were referred to our ENT department. Nine of them presented with horizontal nystagmus. Only 1 woman experienced additional hearing loss. The mean diagnostic delay to the definite diagnosis of a central infarction through magnetic resonance imaging was 4 days (SD, 2.3 d). A careful otologic and neurological examination, including the head impulse test and caloric testing, is mandatory. Because ischemic events cannot be diagnosed in computed tomographic scans at an early stage, we strongly recommend to perform cranial magnetic resonance imaging within 48 hours from admission if vertigo has not improved under conservative treatment.

  10. Central axis dose verification in patients treated with total body irradiation of photons using a Computed Radiography system

    International Nuclear Information System (INIS)

    Rubio Rivero, A.; Caballero Pinelo, R.; Gonzalez Perez, Y.

    2015-01-01

    To propose and evaluate a method for central axis dose verification in patients treated with total body irradiation (TBI) with photons, using images obtained through a Computed Radiography (CR) system. The Computed Radiography (Fuji) portal imaging cassette readings were used and correlated with the absorbed dose measured in water using 10 x 10 irradiation fields and an ionization chamber on the 60Co unit. The analytical and graphical expression was obtained with the software 'Origin8'; the TBI patient portal verification images were processed using the software ImageJ to obtain the patient dose. To validate the results, the absorbed dose was measured with an ionization chamber in RW3 phantoms of different thicknesses, simulating real TBI conditions. Finally, a retrospective study over the last 4 years was performed, obtaining the patients' absorbed dose based on the image reading and comparing it with the planned dose. The analytical equation obtained permits estimating the absorbed dose from the image pixel value and the dose measured with the ionization chamber, correlated with patient clinical records. These results were compared with reported evidence, obtaining a difference of less than 2%; the three methods were compared and the results agree within 10%. (Author)
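    The essence of the method, fitting a calibration relation between CR pixel value and ionization-chamber dose and then applying it to patient portal images, can be sketched as follows. The numerical values and the linear form of the fit are illustrative assumptions, not the calibration reported by the authors:

```python
import numpy as np

# Hypothetical calibration points: mean pixel value of the CR image vs.
# absorbed dose in water measured with the ionization chamber (10 x 10 fields).
pixel_value = np.array([1200.0, 1850.0, 2500.0, 3150.0, 3800.0])
dose_cgy    = np.array([  50.0,  100.0,  150.0,  200.0,  250.0])

# Assumed linear calibration: dose = a * pixel + b
a, b = np.polyfit(pixel_value, dose_cgy, deg=1)

def pixel_to_dose(mean_pixel):
    """Estimate the central-axis dose from the mean pixel value of a portal image."""
    return a * mean_pixel + b

planned = 120.0                      # cGy, from the treatment plan
measured = pixel_to_dose(2150.0)     # mean pixel value of an ImageJ ROI
print(measured, 100.0 * (measured - planned) / planned)  # dose and % deviation
```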

  11. Desk-top computer assisted processing of thermoluminescent dosimeters

    International Nuclear Information System (INIS)

    Archer, B.R.; Glaze, S.A.; North, L.B.; Bushong, S.C.

    1977-01-01

    An accurate dosimetric system utilizing a desk-top computer and high sensitivity ribbon type TLDs has been developed. The system incorporates an exposure history file and procedures designed for constant spatial orientation of each dosimeter. Processing of information is performed by two computer programs. The first calculates relative response factors to insure that the corrected response of each TLD is identical following a given dose of radiation. The second program computes a calibration factor and uses it and the relative response factor to determine the actual dose registered by each TLD. (U.K.)
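    The two-program procedure described above amounts to a per-dosimeter sensitivity correction followed by a batch calibration. A compact sketch of the same arithmetic, with illustrative readings rather than the authors' data, is given below:

```python
import numpy as np

# Program 1: relative response factors from a uniform calibration exposure.
# Every TLD receives the same dose; the factors remove chip-to-chip spread.
cal_readings = np.array([98.0, 102.5, 100.8, 95.4, 103.3])     # nC, illustrative
relative_factor = cal_readings.mean() / cal_readings            # per-chip correction

# Program 2: calibration factor from the known delivered dose, then field doses.
known_dose_cgy = 100.0
calibration_factor = known_dose_cgy / (cal_readings * relative_factor).mean()

# Field readings, assumed here to come from the first three chips of the batch.
field_readings = np.array([41.2, 87.9, 120.4])
doses = field_readings * relative_factor[:3] * calibration_factor
print(doses)   # dose in cGy registered by each field dosimeter
```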

  12. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  13. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
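    The project's central idea, reconstructing multiple slices of the volume simultaneously, can be imitated on the CPU with a process pool and an off-the-shelf filtered back-projection routine; the actual implementation relies on CUDA kernels, texture memory and asynchronous transfers instead. Everything below (scikit-image's iradon, the synthetic sinograms) is a stand-in for illustration:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from skimage.transform import iradon

ANGLES = np.linspace(0.0, 180.0, 180, endpoint=False)

def reconstruct_slice(sinogram):
    """Filtered back-projection of one slice (rows = detector bins, cols = angles)."""
    return iradon(sinogram, theta=ANGLES)

if __name__ == "__main__":
    # Stand-in data: 32 slices of 128-bin sinograms over 180 projection angles.
    rng = np.random.default_rng(0)
    sinograms = [rng.random((128, len(ANGLES))) for _ in range(32)]

    # Slices are independent, so they can be reconstructed concurrently; the GPU
    # version exploits the same independence by mapping slices to thread blocks.
    with ProcessPoolExecutor() as pool:
        volume = list(pool.map(reconstruct_slice, sinograms))
```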

  14. The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD

    OpenAIRE

    Cox, Mitchell Arij; Reed, Robert; Mellado Garcia, Bruce Rafael

    2014-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a clus...

  15. Bioinformation processing a primer on computational cognitive science

    CERN Document Server

    Peterson, James K

    2016-01-01

    This book shows how mathematics, computer science and science can be usefully and seamlessly intertwined. It begins with a general model of cognitive processes in a network of computational nodes, such as neurons, using a variety of tools from mathematics, computational science and neurobiology. It then moves on to solve the diffusion model from a low-level random walk point of view. It also demonstrates how this idea can be used in a new approach to solving the cable equation, in order to better understand the neural computation approximations. It introduces specialized data for emotional content, which allows a brain model to be built using MatLab tools, and also highlights a simple model of cognitive dysfunction.

  16. 32 CFR 516.9 - Service of criminal process within the United States.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Service of criminal process within the United... OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.9 Service of criminal process within the United States. (a) Surrender of personnel. Guidance for surrender of military personnel...

  17. Experimental data processing techniques by a personal computer

    International Nuclear Information System (INIS)

    Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.

    1989-01-01

    A personal computer (16-bit, about 1 MB memory) can be used at low cost for experimental data processing. This report surveys the important techniques for A/D and D/A conversion and for the display, storage and transfer of experimental data. The items to be considered in the software are also discussed. Practical programs written in BASIC and Assembler are given as examples. We present some techniques for faster processing in BASIC and show that a system composed of BASIC and Assembler is useful in a practical experiment. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We tested the processing speed of some typical programming languages: BASIC (interpreter), C, FORTRAN and Assembler. For calculation, FORTRAN has the best performance, comparable to or better than Assembler, even on a personal computer. (author)

  18. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium ® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration

  19. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  20. [Personal computer-based computer monitoring system of the anesthesiologist (2-year experience in development and use)].

    Science.gov (United States)

    Buniatian, A A; Sablin, I N; Flerov, E V; Mierbekov, E M; Broĭtman, O G; Shevchenko, V V; Shitikov, I I

    1995-01-01

    Creation of computer monitoring systems (CMS) for operating rooms is one of the most important spheres of personal computer employment in anesthesiology. The authors developed a PC RS/AT-based CMS and have effectively used it for more than 2 years. This system permits comprehensive monitoring in cardiosurgical operations by real-time processing of the values of arterial and central venous pressure, pressure in the pulmonary artery, bioelectrical activity of the brain, and two temperature values. Use of this CMS helped appreciably improve patients' safety during surgery. The possibility of assessing brain function by computer monitoring of the EEG simultaneously with central hemodynamics and body temperature permits the anesthesiologist to objectively assess the depth of anesthesia and to diagnose cerebral hypoxia. The automated anesthesiology chart issued by the CMS after surgery reliably reflects the patient's status and the measures taken by the anesthesiologist.

  1. Fog Computing and Edge Computing Architectures for Processing Data From Diabetes Devices Connected to the Medical Internet of Things.

    Science.gov (United States)

    Klonoff, David C

    2017-07-01

    The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. The devices in the IoT include such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network, closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process it near the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries, where laws may limit use or permit unwanted governmental access, and (5) lower costs because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.
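
    As a minimal sketch of the edge/fog idea described above, the code below handles simulated glucose readings locally and only sends a compact summary to a (hypothetical) cloud endpoint. The thresholds, the function names, and the send_to_cloud stub are all assumptions for illustration, not part of any real device's software.

```python
from statistics import mean

LOW_MGDL, HIGH_MGDL = 70, 180  # illustrative alarm thresholds, not clinical guidance

def send_to_cloud(payload: dict) -> None:
    """Stub for a slow remote call; a real device would batch or queue this."""
    print("-> cloud:", payload)

def process_at_edge(readings: list[float]) -> None:
    """Handle each reading locally (millisecond path); summarize to the cloud."""
    for value in readings:
        if value < LOW_MGDL or value > HIGH_MGDL:
            # The safety-critical decision stays on the device: no cloud round trip.
            print(f"local alarm: glucose {value} mg/dL out of range")
    # Only a compact summary leaves the device, saving bandwidth and limiting exposure.
    send_to_cloud({"n": len(readings), "mean_mgdl": round(mean(readings), 1)})

process_at_edge([95.0, 110.2, 185.4, 76.3])
```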

  2. Biodiversity indicators fruit trees for farm units of the central region of Cuba

    Directory of Open Access Journals (Sweden)

    Esther Gutiérrez Fleites

    2014-10-01

    Full Text Available This research was conducted in order to determine biodiversity indicators for fruit trees in the province of Cienfuegos. The work was carried out from May to October 2009 in 49 production units in 10 municipalities of the Central Region (Villa Clara, Cienfuegos and Sancti Spiritus), which were randomly selected. To characterize them, the total cultivable area and the area under exploitation, as well as the sources of water supply, were determined, grouping the data by municipality and by form of organization of agricultural production. An inventory of all fruit species present in each production unit was performed, and the plant biodiversity indicators that define richness, dominance and diversity were evaluated. The data were statistically analyzed using the Statgraphics Plus version 5.1 program. The results indicated that the units are characterized by 80-100% of their surface area being in operation, although in the case of the Agricultural Production Cooperatives the values reach 62%, and that wells and rivers appear as the main sources of water supply. A total of 47 fruit species were recorded. The biodiversity indicators show an overall average richness of seven, a range of 1.1, and a dominance of 0.59; in addition, there are significant differences between municipalities but not between the different forms of organization of agricultural production.
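
    The richness, dominance and diversity indicators mentioned above are commonly computed from species abundance counts. The sketch below shows one standard way to do so (species richness, Simpson dominance, Shannon diversity) on made-up counts; it is an illustration and need not match the exact formulation used in this study.

```python
import math

def biodiversity_indicators(counts: dict[str, int]) -> dict[str, float]:
    """Richness, Simpson dominance and Shannon diversity from abundance counts."""
    total = sum(counts.values())
    proportions = [c / total for c in counts.values() if c > 0]
    richness = len(proportions)                            # number of species present
    dominance = sum(p * p for p in proportions)            # Simpson's dominance index
    shannon = -sum(p * math.log(p) for p in proportions)   # Shannon diversity H'
    return {"richness": richness, "dominance": dominance, "shannon": shannon}

# Hypothetical fruit-tree counts for one production unit.
unit_counts = {"mango": 12, "avocado": 5, "guava": 8, "citrus": 20}
print(biodiversity_indicators(unit_counts))
```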

  3. A Generic Software Development Process Refined from Best Practices for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Soojin Park

    2015-04-01

    Full Text Available Cloud computing has emerged as more than just a piece of technology; it is rather a new IT paradigm. The philosophy behind cloud computing shares its view with green computing, where computing environments and resources are not subjects to own but subjects of sustained use. However, converting currently used IT services to Software as a Service (SaaS) cloud computing environments introduces several new risks. To mitigate such risks, existing software development processes must undergo significant remodeling. This study analyzes actual cases of SaaS cloud computing environment adoption as a way to derive four new best practices for software development and incorporates the identified best practices into currently-in-use processes. Furthermore, this study presents a design for generic software development processes that implement the proposed best practices. The design for the generic process has been applied to reinforce the weak points found in the SaaS cloud service development practices used by eight enterprises currently developing or operating actual SaaS cloud computing services. Lastly, this study evaluates the applicability of the proposed SaaS cloud oriented development process through analyzing the feedback data collected from actual application to the development of a SaaS cloud service, Astation.

  4. The modernization of the process computer of the Trillo Nuclear Power Plant

    International Nuclear Information System (INIS)

    Martin Aparicio, J.; Atanasio, J.

    2011-01-01

    The paper describes the modernization of the process computer of the Trillo Nuclear Power Plant. The process computer functions have been incorporated in the non-safety I&C platform selected in Trillo NPP: the Siemens SPPA-T2000 OM690 (formerly known as Teleperm XP). The upgrade of the Human Machine Interface of the control room has been included in the project. The modernization project has followed the same development process used in the upgrade of the process computers of German PWR nuclear power plants. (Author)

  5. An overview of computer-based natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1983-01-01

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines in natural language (like English, Japanese, German, etc., in contrast to formal computer languages). The doors that such an achievement can open have made this a major research area in Artificial Intelligence and Computational Linguistics. Commercial natural language interfaces to computers have recently entered the market, and the future looks bright for other applications as well. This report reviews the basic approaches to such systems, the techniques utilized, applications, the state of the art of the technology, issues and research requirements, the major participants, and finally, future trends and expectations. It is anticipated that this report will prove useful to engineering and research managers, potential users, and others who will be affected by this field as it unfolds.

  6. Inpatient Peripherally Inserted Central Venous Catheter Complications: Should Peripherally Inserted Central Catheter Lines Be Placed in the Intensive Care Unit Setting?

    Science.gov (United States)

    Martyak, Michael; Kabir, Ishraq; Britt, Rebecca

    2017-08-01

    Peripherally inserted central venous catheters (PICCs) are now commonly used for central access in the intensive care unit (ICU) setting; however, there is a paucity of data evaluating the complication rates associated with these lines. We performed a retrospective review of all PICCs placed in the inpatient setting at our institution during a 1-year period from January 2013 to December 2013. These were divided into two groups: those placed at the bedside in the ICU and those placed by interventional radiology in non-ICU patients. Data regarding infectious and thrombotic complications were collected and evaluated. During the study period, 1209 PICC line placements met inclusion criteria and were evaluated; 1038 were placed by interventional radiology in non-ICU patients, and 171 were placed at the bedside in ICU patients. The combined thrombotic and central line associated blood stream infection rate was 6.17 per cent in the non-ICU group and 10.53 per cent in the ICU group (P = 0.035). The thrombotic complication rate was 5.88 per cent in the non-ICU group and 7.60 per cent in the ICU group (P = 0.38), whereas the central line associated blood stream infection rate was 0.29 per cent in the non-ICU group and 2.92 per cent in the ICU group (P = 0.002). This study seems to suggest that PICC lines placed at the bedside in the ICU setting are associated with higher complication rates, in particular infectious complications, than those placed by interventional radiology in non-ICU patients. The routine placement of PICC lines in the ICU settings needs to be reevaluated given these findings.
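
    To illustrate how complication rates such as those reported above can be compared between two groups, the sketch below applies Fisher's exact test to event counts back-calculated from the reported percentages. The counts are approximate reconstructions for illustration only, and this is not necessarily the statistical test the authors used.

```python
from scipy.stats import fisher_exact

# Approximate event counts back-calculated from the reported combined rates:
# ICU: 10.53% of 171 placements ~= 18 events; non-ICU: 6.17% of 1038 ~= 64 events.
table = [
    [18, 171 - 18],    # ICU:     complications, no complications
    [64, 1038 - 64],   # non-ICU: complications, no complications
]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```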

  7. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  8. Tutorial: Signal Processing in Brain-Computer Interfaces

    NARCIS (Netherlands)

    Garcia Molina, G.

    2010-01-01

    Research in Electroencephalogram (EEG) based Brain-Computer Interfaces (BCIs) has been considerably expanding during the last few years. Such an expansion owes to a large extent to the multidisciplinary and challenging nature of BCI research. Signal processing undoubtedly constitutes an essential

  9. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash

    Directory of Open Access Journals (Sweden)

    Mathew G. Pelletier

    2008-02-01

    Full Text Available One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphics processing units (GPUs) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, moving the new parallel algorithm onto an alternative computing platform, the graphics processing unit (GPU), for processing of the cotton trash images yielded a speed-up of over 6.5 times over optimized code running on the PC's central processing unit (CPU).
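
    The sketch below gives only a rough flavor of the approach: a simple dark-pixel threshold estimates the trash fraction of a lint image, running on the GPU via CuPy when that package is available and falling back to NumPy otherwise. The threshold value, the synthetic image, and the use of CuPy are illustrative assumptions; the paper's actual algorithm is considerably more involved.

```python
import numpy as np

try:
    import cupy as xp  # GPU arrays with a NumPy-like API, if CuPy is installed
    ON_GPU = True
except ImportError:    # fall back to a CPU implementation
    xp = np
    ON_GPU = False

def trash_fraction(gray_image, dark_threshold=80):
    """Fraction of pixels darker than the threshold (a crude proxy for trash content)."""
    img = xp.asarray(gray_image)
    mask = img < dark_threshold      # element-wise comparison on the CPU or the GPU
    return float(mask.mean())        # scalar result copied back to the host

# Synthetic 8-bit "lint" image: mostly bright lint with one dark trash region.
rng = np.random.default_rng(0)
image = rng.integers(150, 255, size=(1024, 1024), dtype=np.uint8)
image[100:120, 200:260] = 30

backend = "GPU" if ON_GPU else "CPU"
print(f"backend = {backend}, trash fraction = {trash_fraction(image):.4f}")
```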

  10. Background Noise Degrades Central Auditory Processing in Toddlers.

    Science.gov (United States)

    Niemitalo-Haapola, Elina; Haapala, Sini; Jansson-Verkasalo, Eira; Kujala, Teija

    2015-01-01

    Noise, as an unwanted sound, has become one of modern society's environmental conundrums, and many children are exposed to higher noise levels than previously assumed. However, the effects of background noise on central auditory processing of toddlers, who are still acquiring language skills, have so far not been determined. The authors evaluated the effects of background noise on toddlers' speech-sound processing by recording event-related brain potentials. The hypothesis was that background noise modulates neural speech-sound encoding and degrades speech-sound discrimination. Obligatory P1 and N2 responses for standard syllables and the mismatch negativity (MMN) response for five different syllable deviants presented in a linguistic multifeature paradigm were recorded in silent and background noise conditions. The participants were 18 typically developing 22- to 26-month-old monolingual children with healthy ears. The results showed that the P1 amplitude was smaller and the N2 amplitude larger in the noisy conditions compared with the silent conditions. In the noisy condition, the MMN was absent for the intensity and vowel changes and diminished for the consonant, frequency, and vowel duration changes embedded in speech syllables. Furthermore, the frontal MMN component was attenuated in the noisy condition. However, noise had no effect on P1, N2, or MMN latencies. The results from this study suggest multiple effects of background noise on the central auditory processing of toddlers. It modulates the early stages of sound encoding and dampens neural discrimination vital for accurate speech perception. These results imply that speech processing of toddlers, who may spend long periods of daytime in noisy conditions, is vulnerable to background noise. In noisy conditions, toddlers' neural representations of some speech sounds might be weakened. Thus, special attention should be paid to acoustic conditions and background noise levels in children's daily environments.
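
    The mismatch negativity discussed above is typically quantified as a difference wave: the averaged response to deviant stimuli minus the averaged response to standards, with the mean amplitude taken in a latency window. The sketch below shows that computation on simulated epochs; the sampling rate, analysis window, and data are assumptions, not the parameters of this study.

```python
import numpy as np

FS = 500                                  # assumed sampling rate in Hz
t = np.arange(-0.1, 0.5, 1.0 / FS)        # epoch time axis in seconds

rng = np.random.default_rng(1)
# Simulated single-trial epochs (trials x samples) for standards and deviants.
standards = rng.normal(0.0, 2.0, size=(200, t.size))
deviants = rng.normal(0.0, 2.0, size=(60, t.size))
# Inject an MMN-like negative deflection around 180 ms into the deviant trials.
deviants += -1.5 * np.exp(-((t - 0.18) ** 2) / (2 * 0.03 ** 2))

# Difference wave = average deviant response minus average standard response.
difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)

# Mean amplitude in an assumed 150-250 ms analysis window.
window = (t >= 0.15) & (t <= 0.25)
print(f"MMN mean amplitude, 150-250 ms: {difference_wave[window].mean():.2f} (simulated units)")
```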

  11. Whole Language, Computers and CD-ROM Technology: A Kindergarten Unit on "Benjamin Bunny."

    Science.gov (United States)

    Balajthy, Ernest

    A kindergarten teacher, two preservice teachers, and a college consultant on educational computer technology designed and developed a 10-day whole-language integrated unit on the theme of Beatrix Potter's "Benjamin Bunny." The project was designed as a demonstration of the potential of integrating the CD-ROM-based version of…

  12. Establishing a central waste processing and storage facility in Ghana

    International Nuclear Information System (INIS)

    Glover, E.T.; Fletcher, J.J.; Darko, E.O.

    2001-01-01

    regulations. About 50 delegates from various ministries and establishments participated in the seminar. The final outcome of the draft regulation was sent to the Attorney General's office for the necessary legal review before being presented to Parliament through the Ministry of Environment, Science and Technology. An inventory of radiation sources and radioactive waste has been established using the Regulatory Authority Information System (RAIS) and the Sealed Radiation Sources Registry System (SRS). A central waste processing and storage facility was constructed in the mid-sixties to handle waste from a 2 MW reactor that was never installed. The facility consists of a decontamination unit, two concrete vaults (about 5 x 15 m and 4 m deep) intended for low and intermediate level waste storage, and 60 wells (about 0.5 m diameter x 4.6 m) for storage of spent fuel. This facility will require significant rehabilitation. Safety and performance assessment studies have been carried out with the help of three IAEA experts. The recommendations from the assessment indicate that the vaults are too old and deteriorated to be considered for any future waste storage. However, the decontamination unit and the wells are still in good condition and were earmarked for refurbishment and use as waste processing and storage facilities, respectively. The decontamination unit has a surface area of 60 m2 and a laboratory with a surface area of 10 m2. The decontamination unit will have four technological areas: an area for cementation of non-compactible solid waste and spent sealed sources, an area for compaction of compactable solid waste, and a controlled area for conditioned wastes in 200-L drums. Provision has been made to condition liquid waste. There will be a section for receipt and segregation of the waste. The laboratory will be provided with the necessary equipment for quality control. Research to support technological processes will be carried out in the laboratory. A quality assurance and control systems

  13. TESTING NONLINEAR INFLATION CONVERGENCE FOR THE CENTRAL AFRICAN ECONOMIC AND MONETARY COMMUNITY

    Directory of Open Access Journals (Sweden)

    Emmanuel Anoruo

    2014-01-01

    Full Text Available This paper uses nonlinear unit root testing procedures to examine the issue of inflation convergence for the Central African Economic and Monetary Community (CEMAC) member states, including Cameroon, the Central African Republic, Chad, Equatorial Guinea, Gabon, and the Republic of Congo. The results from nonlinear STAR unit root tests suggest that the inflation differentials for the sample countries are nonlinear and mean-reverting processes. These results provide evidence of inflation convergence among countries within CEMAC. The finding of inflation convergence indicates the feasibility of a common monetary policy and/or inflation-targeting regime within CEMAC.
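
    A common nonlinear (ESTAR-type) unit root test of the kind referred to above is the Kapetanios–Shin–Snell (KSS) test, which regresses the differenced series on the cubed lagged level and examines the t-statistic on that term. The sketch below implements that auxiliary regression on simulated data; it is a simplified illustration (no demeaning, deterministic terms, or lag augmentation) rather than the exact procedure used in the paper.

```python
import numpy as np

def kss_t_statistic(y: np.ndarray) -> float:
    """t-statistic on delta in  diff(y_t) = delta * y_{t-1}**3 + error  (KSS form)."""
    dy = np.diff(y)
    x = y[:-1] ** 3
    delta = (x @ dy) / (x @ x)                     # OLS slope, no intercept
    residuals = dy - delta * x
    sigma2 = residuals @ residuals / (len(dy) - 1)
    standard_error = np.sqrt(sigma2 / (x @ x))
    return float(delta / standard_error)           # compare with KSS critical values

# Simulated mean-reverting inflation differential (a stationary AR(1) process).
rng = np.random.default_rng(42)
shocks = rng.normal(size=300)
diff_series = np.zeros(300)
for i in range(1, 300):
    diff_series[i] = 0.7 * diff_series[i - 1] + shocks[i]

t_stat = kss_t_statistic(diff_series)
print(f"KSS t-statistic: {t_stat:.2f} (large negative values reject a unit root)")
```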

  14. The Performance Improvement of the Lagrangian Particle Dispersion Model (LPDM) Using Graphics Processing Unit (GPU) Computing

    Science.gov (United States)

    2017-08-01

    ... used for its GPU computing capability during the experiment. It has Nvidia Tesla K40 GPU accelerators containing 32 GPU nodes consisting of 1024 ... cores. CUDA is a parallel computing platform and application programming interface (API) model that was created and designed by Nvidia to give direct ...

  15. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-05-08

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations, where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than the ones considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
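
    As a minimal illustration of the Widom-insertion step mentioned above, the sketch below estimates the Boltzmann-factor average <exp(-beta*U)> for a Lennard-Jones test particle inserted at random points of a toy periodic framework. The framework, parameters, and reduced units are assumptions for the example; the actual code works on precomputed GPU energy grids with inaccessible regions blocked.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy periodic "framework": random atom positions in a cubic box (reduced LJ units).
BOX = 20.0
framework = rng.uniform(0.0, BOX, size=(200, 3))
EPS, SIGMA, BETA = 1.0, 1.0, 1.0          # assumed Lennard-Jones parameters and 1/kT

def insertion_energy(point):
    """Lennard-Jones energy of a test particle at `point` with minimum-image wrapping."""
    d = framework - point
    d -= BOX * np.round(d / BOX)           # periodic boundary conditions
    r2 = np.maximum(np.einsum("ij,ij->i", d, d), 1e-12)
    inv6 = (SIGMA ** 2 / r2) ** 3
    return float(np.sum(4.0 * EPS * (inv6 ** 2 - inv6)))

# Widom insertions: average Boltzmann factor over random trial positions.
n_insertions = 20_000
boltzmann = [np.exp(-BETA * insertion_energy(rng.uniform(0.0, BOX, 3)))
             for _ in range(n_insertions)]
print(f"<exp(-beta U)> ~ {np.mean(boltzmann):.3e}  (proportional to the Henry coefficient)")
```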

  16. The application of projected conjugate gradient solvers on graphical processing units

    International Nuclear Information System (INIS)

    Lin, Youzuo; Renaut, Rosemary

    2011-01-01

    Graphical processing units introduce the capability for large scale computation at the desktop. Presented numerical results verify that efficiencies and accuracies of basic linear algebra subroutines of all levels when implemented in CUDA and Jacket are comparable. But experimental results demonstrate that the basic linear algebra subroutines of level three offer the greatest potential for improving efficiency of basic numerical algorithms. We consider the solution of the multiple right hand side set of linear equations using Krylov subspace-based solvers. Thus, for the multiple right hand side case, it is more efficient to make use of a block implementation of the conjugate gradient algorithm, rather than to solve each system independently. Jacket is used for the implementation. Furthermore, including projection from one system to another improves efficiency. A relevant example, for which simulated results are provided, is the reconstruction of a three dimensional medical image volume acquired from a positron emission tomography scanner. Efficiency of the reconstruction is improved by using projection across nearby slices.
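
    To make the multiple right-hand-side point above concrete, the sketch below implements a simple block conjugate gradient iteration in which the matrix is applied to all search directions at once, turning the dominant kernel into a level-3 (matrix-matrix) operation. It is a plain NumPy illustration of the technique under assumed test data, not the authors' CUDA/Jacket implementation.

```python
import numpy as np

def block_cg(A, B, tol=1e-8, max_iter=500):
    """Block conjugate gradient for A X = B with symmetric positive definite A.

    All right-hand sides share the search directions, so the dominant cost per
    iteration is the matrix-matrix product A @ P (a BLAS level-3 kernel).
    """
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    RtR = R.T @ R
    for _ in range(max_iter):
        Q = A @ P                                  # one level-3 product for all RHS
        alpha = np.linalg.solve(P.T @ Q, RtR)      # only small s-by-s solves
        X += P @ alpha
        R -= Q @ alpha
        RtR_new = R.T @ R
        if np.sqrt(RtR_new.trace()) < tol:         # Frobenius norm of the residual block
            break
        beta = np.linalg.solve(RtR, RtR_new)
        P = R + P @ beta
        RtR = RtR_new
    return X

# Small symmetric positive definite test problem with 4 right-hand sides.
rng = np.random.default_rng(3)
M = rng.normal(size=(200, 200))
A = M @ M.T + 200.0 * np.eye(200)
B = rng.normal(size=(200, 4))
X = block_cg(A, B)
print("max residual:", np.max(np.abs(A @ X - B)))
```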

  17. The application of projected conjugate gradient solvers on graphical processing units

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Renaut, Rosemary [ARIZONA STATE UNIV.

    2011-01-26

    Graphical processing units introduce the capability for large scale computation at the desktop. Presented numerical results verify that efficiencies and accuracies of basic linear algebra subroutines of all levels when implemented in CUDA and Jacket are comparable. But experimental results demonstrate that the basic linear algebra subroutines of level three offer the greatest potential for improving efficiency of basic numerical algorithms. We consider the solution of the multiple right hand side set of linear equations using Krylov subspace-based solvers. Thus, for the multiple right hand side case, it is more efficient to make use of a block implementation of the conjugate gradient algorithm, rather than to solve each system independently. Jacket is used for the implementation. Furthermore, including projection from one system to another improves efficiency. A relevant example, for which simulated results are provided, is the reconstruction of a three dimensional medical image volume acquired from a positron emission tomography scanner. Efficiency of the reconstruction is improved by using projection across nearby slices.

  18. Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines

    Science.gov (United States)

    2014-11-01

    architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architecture all ... evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will ... architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges

  19. 15 CFR 971.427 - Processing outside the United States.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Processing outside the United States... THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS Issuance/Transfer: Terms, Conditions and Restrictions Terms, Conditions and Restrictions § 971.427 Processing...

  20. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I: Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II: Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Reading

  1. The Use of Computer Graphics in the Design Process.

    Science.gov (United States)

    Palazzi, Maria

    This master's thesis examines applications of computer technology to the field of industrial design and ways in which technology can transform the traditional process. Following a statement of the problem, the history and applications of the fields of computer graphics and industrial design are reviewed. The traditional industrial design process…

  2. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  3. SHIPBUILDING PRODUCTION PROCESS DESIGN METHODOLOGY USING COMPUTER SIMULATION

    OpenAIRE

    Marko Hadjina; Nikša Fafandjel; Tin Matulja

    2015-01-01

    In this research, a shipbuilding production process design methodology using computer simulation is suggested. The suggested methodology is expected to provide a better and more efficient tool for the design of complex shipbuilding production processes. In the first part of this research, the existing practice for production process design in shipbuilding is discussed and its shortcomings and problems are emphasized. In continuation, the discrete event simulation modelling method, as basis of sugge...

  4. Cool computers in a bunker. 10 000 kW of cold demand for 160 000 internet computers; Coole Rechner im Bunker. 10 000 kW Kaeltebedarf fuer 160 000 Internetrechner

    Energy Technology Data Exchange (ETDEWEB)

    Klein, S. [Combitherm GmbH, Stuttgart-Fellbach (Germany)

    2007-06-15

    In 2005, Combitherm GmbH of Stuttgart-Fellbach, a producer of refrigerators and heat pumps specializing in customized solutions, was given an unusual order: 1&1 Internet AG, one of the world's biggest internet providers, was looking for a cooling concept for its new central computer system near Baden-Baden, which was to become a central node in international data transmission. Combitherm already had experience with cold-water units and free-cooling elements in the 5000 kW range for a big computer center. The tasks were defined in close cooperation with the customer and with a Karlsruhe bureau of engineering consultants, and a refrigeration concept was developed. (orig.)

  5. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race

    NARCIS (Netherlands)

    Warnke, T.; Reinhardt, O.; Klabunde, A.; Willekens, F.J.; Uhrmacher, A.

    2017-01-01

    Individuals’ decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for

  6. A Tuning Process in a Tunable Architecture Computer System

    OpenAIRE

    深沢, 良彰; 岸野, 覚; 門倉, 敏夫

    1986-01-01

    A tuning process in a tunable architecture computer is described. We have designed a computer system with a tunable architecture. The main components of this computer are four AM2903 bit-slice chips. The control scheme of the microinstructions is horizontal-type, and the length of each instruction is 104 bits. Our tuning algorithm utilizes an execution history of machine-level instructions, because the execution history can be regarded as a property of the user program. In execution histories of simila...

  7. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    International Nuclear Information System (INIS)

    He, Qingyun; Chen, Hongli; Feng, Jingchao

    2015-01-01

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, which was validated by three basic benchmarks in rectangular and round ducts. • CPU parallelization and GPU acceleration were compared against a single-core CPU for MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared, and the results showed that the AMG method is better for these calculations. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using the CUDA technology. The solver is developed on the open-source code OpenFOAM, based on a consistent and conservative scheme, and is suitable for simulating MHD flow under a strong magnetic field in a fusion liquid-metal blanket with a structured or unstructured mesh. We verified the validity of the implementation on several standard cases, including benchmark I (Shercliff and Hunt's cases), benchmark II (fully developed circular pipe MHD flow cases) and benchmark III (the KIT experimental case). The computational performance of the GPU implementation was examined by comparing its double-precision run times with those of essentially the same algorithms and meshes on the CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770K) by a factor of at least 2.

  8. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    He, Qingyun; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn; Feng, Jingchao

    2015-12-15

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, which was validated by three basic benchmarks in rectangular and round ducts. • CPU parallelization and GPU acceleration were compared against a single-core CPU for MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared, and the results showed that the AMG method is better for these calculations. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using the CUDA technology. The solver is developed on the open-source code OpenFOAM, based on a consistent and conservative scheme, and is suitable for simulating MHD flow under a strong magnetic field in a fusion liquid-metal blanket with a structured or unstructured mesh. We verified the validity of the implementation on several standard cases, including benchmark I (Shercliff and Hunt's cases), benchmark II (fully developed circular pipe MHD flow cases) and benchmark III (the KIT experimental case). The computational performance of the GPU implementation was examined by comparing its double-precision run times with those of essentially the same algorithms and meshes on the CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770K) by a factor of at least 2.

  9. Analysis of source spectra, attenuation, and site effects from central and eastern United States earthquakes

    International Nuclear Information System (INIS)

    Lindley, G.

    1998-02-01

    This report describes the results from three studies of source spectra, attenuation, and site effects of central and eastern United States earthquakes. In the first study, source parameter estimates taken from 27 previous studies were combined to test the assumption that the earthquake stress drop is roughly a constant, independent of earthquake size. 200 estimates of stress drop and seismic moment from eastern North American earthquakes were combined. It was found that the estimated stress drop from the 27 studies increases approximately as the square-root of the seismic moment, from about 3 bars at 10^20 dyne-cm to 690 bars at 10^25 dyne-cm. These results do not support the assumption of a constant stress drop when estimating ground motion parameters from eastern North American earthquakes. In the second study, broadband seismograms recorded by the United States National Seismograph Network and cooperating stations have been analysed to determine Q_Lg as a function of frequency in five regions: the northeastern US, southeastern US, central US, northern Basin and Range, and California and western Nevada. In the third study, using spectral analysis, estimates have been made for the anelastic attenuation of four regional phases, and estimates have been made for the source parameters of 27 earthquakes, including the M_b 5.6, 14 April, 1995, West Texas earthquake.
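
    For context on how stress-drop estimates of this kind are commonly obtained, one standard route is the Brune source model, in which the source radius follows from the corner frequency and the stress drop from the seismic moment and radius. The sketch below evaluates those textbook relations for assumed example values; the numbers are illustrative and are not taken from the studies combined in this report.

```python
import math

def brune_stress_drop(moment_dyne_cm, corner_freq_hz, beta_cm_s=3.5e5):
    """Stress drop in bars from the Brune (1970) source model (CGS units).

    r = 2.34 * beta / (2 * pi * fc)   and   delta_sigma = 7 * M0 / (16 * r**3),
    with 1 bar = 1e6 dyne/cm^2.
    """
    radius_cm = 2.34 * beta_cm_s / (2.0 * math.pi * corner_freq_hz)
    stress_drop_dyne_cm2 = 7.0 * moment_dyne_cm / (16.0 * radius_cm ** 3)
    return stress_drop_dyne_cm2 / 1.0e6

# Illustrative values only: M0 = 1e23 dyne-cm and a 1 Hz corner frequency.
print(f"stress drop ~ {brune_stress_drop(1.0e23, corner_freq_hz=1.0):.1f} bars")
```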

  10. Insulating process for HT-7U central solenoid model coils

    International Nuclear Information System (INIS)

    Cui Yimin; Pan Wanjiang; Wu Songtao; Wan Yuanxi

    2003-01-01

    The HT-7U superconducting Tokamak is a fully superconducting magnetically confined fusion device. The insulating system of its central solenoid coils is critical to its properties. In this paper, the forming of the insulating system and the vacuum pressure impregnation (VPI) process are introduced, and the whole insulating process is verified under superconducting experiment conditions.

  11. Comparing Administrative and Clinical Data for Central Line Associated Blood Stream Infections in Pediatric Intensive Care Unit and Pediatric Cardiothoracic Intensive Care Unit

    Science.gov (United States)

    Bond, Jory; Issa, Mohamed; Nasrallah, Ali; Bahroloomi, Sheena; Blackwood, Roland A.

    2016-01-01

    Central line associated bloodstream infections (CLABSIs) are a frequent source of health complications for patients of all ages, including for patients in the pediatric intensive care unit (PICU) and Pediatric Cardiothoracic Intensive Care Unit (PCTU). Many hospitals, including the University of Michigan Health System, currently use the International Classification of Disease (ICD) coding system when coding for CLABSI. The purpose of this study was to determine the accuracy of coding for CLABSI infections with ICD-9CM codes in PICU and PCTU patients. A retrospective chart review was conducted for 75 PICU and PCTU patients with 90 events of hospital acquired central line infections at the University of Michigan Health System (from 2007-2011). The different variables examined in the chart review included the type of central line the patient had, the duration of the stay of the line, the type of organism infecting the patient, and the treatment the patient received. A review was conducted to assess if patients had received the proper ICD-9CM code for their hospital acquired infection. In addition, each patient chart was searched using Electronic Medical Record Search Engine to determine if any phrases that commonly referred to hospital acquired CLABSIs were present in their charts. Our review found that in most CLABSI cases the hospital’s administrative data diagnosis using ICD-9CM coding systems did not code for the CLABSI. Our results indicate a low sensitivity of 32% in the PICU and an even lower sensitivity of 12% in the PCTU. Using these results, we can conclude that the ICD-9CM coding system cannot be used for accurately defining hospital acquired CLABSIs in administrative data. With the new use of the ICD-10CM coding system, further research is needed to assess the effects of the ICD-10CM coding system on the accuracy of administrative data.
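
    The sensitivities quoted above are the fraction of chart-review-confirmed CLABSIs that also carried an appropriate ICD-9-CM code. The sketch below shows that calculation on hypothetical counts chosen only to reproduce figures close to the reported 32% and 12%; the actual event counts per unit are not given in this abstract.

```python
def sensitivity(coded_true_positives, confirmed_events):
    """Sensitivity = TP / (TP + FN), with chart review as the reference standard."""
    return coded_true_positives / confirmed_events

# Hypothetical counts chosen only to illustrate the calculation.
units = {
    "PICU": {"confirmed": 50, "coded": 16},   # 16/50 = 32%
    "PCTU": {"confirmed": 25, "coded": 3},    # 3/25  = 12%
}

for unit, counts in units.items():
    s = sensitivity(counts["coded"], counts["confirmed"])
    print(f"{unit}: ICD-9-CM coding sensitivity = {s:.0%}")
```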

  12. Comparing administrative and clinical data for central line associated blood stream infections in Pediatric Intensive Care Unit and Pediatric Cardiothoracic Intensive Care Unit

    Directory of Open Access Journals (Sweden)

    Jory Bond

    2016-10-01

    Full Text Available Central line associated bloodstream infections (CLABSIs) are a frequent source of health complications for patients of all ages, including for patients in the pediatric intensive care unit (PICU) and Pediatric Cardiothoracic Intensive Care Unit (PCTU). Many hospitals, including the University of Michigan Health System, currently use the International Classification of Disease (ICD) coding system when coding for CLABSI. The purpose of this study was to determine the accuracy of coding for CLABSI infections with ICD-9CM codes in PICU and PCTU patients. A retrospective chart review was conducted for 75 PICU and PCTU patients with 90 events of hospital acquired central line infections at the University of Michigan Health System (from 2007-2011). The different variables examined in the chart review included the type of central line the patient had, the duration of the stay of the line, the type of organism infecting the patient, and the treatment the patient received. A review was conducted to assess if patients had received the proper ICD-9CM code for their hospital acquired infection. In addition, each patient chart was searched using Electronic Medical Record Search Engine to determine if any phrases that commonly referred to hospital acquired CLABSIs were present in their charts. Our review found that in most CLABSI cases the hospital’s administrative data diagnosis using ICD-9CM coding systems did not code for the CLABSI. Our results indicate a low sensitivity of 32% in the PICU and an even lower sensitivity of 12% in the PCTU. Using these results, we can conclude that the ICD-9CM coding system cannot be used for accurately defining hospital acquired CLABSIs in administrative data. With the new use of the ICD-10CM coding system, further research is needed to assess the effects of the ICD-10CM coding system on the accuracy of administrative data.

  13. Central tarsal bone fractures in horses not used for racing: Computed tomographic configuration and long-term outcome of lag screw fixation.

    Science.gov (United States)

    Gunst, S; Del Chicca, F; Fürst, A E; Kuemmerle, J M

    2016-09-01

    There are no reports on the configuration of equine central tarsal bone fractures based on cross-sectional imaging and clinical and radiographic long-term outcome after internal fixation. To report clinical, radiographic and computed tomographic findings of equine central tarsal bone fractures and to evaluate the long-term outcome of internal fixation. Retrospective case series. All horses diagnosed with a central tarsal bone fracture at our institution in 2009-2013 were included. Computed tomography and internal fixation using lag screw technique was performed in all patients. Medical records and diagnostic images were reviewed retrospectively. A clinical and radiographic follow-up examination was performed at least 1 year post operatively. A central tarsal bone fracture was diagnosed in 6 horses. Five were Warmbloods used for showjumping and one was a Quarter Horse used for reining. All horses had sagittal slab fractures that began dorsally, ran in a plantar or plantaromedial direction and exited the plantar cortex at the plantar or plantaromedial indentation of the central tarsal bone. Marked sclerosis of the central tarsal bone was diagnosed in all patients. At long-term follow-up, 5/6 horses were sound and used as intended although mild osteophyte formation at the distal intertarsal joint was commonly observed. Central tarsal bone fractures in nonracehorses had a distinct configuration but radiographically subtle additional fracture lines can occur. A chronic stress related aetiology seems likely. Internal fixation of these fractures based on an accurate diagnosis of the individual fracture configuration resulted in a very good prognosis. © 2015 EVJ Ltd.

  14. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  15. Computer-integrated electric-arc melting process control system

    OpenAIRE

    Дёмин, Дмитрий Александрович

    2014-01-01

    Developing common principles for equipping melting process automation systems with hardware, and creating on their basis rational variants of computer-integrated electric-arc melting control systems, is a relevant task, since it allows a comprehensive approach to the issue of modernizing the melting sections of workshops. This approach allows the computer-integrated electric-arc furnace control system to be formed as part of a queuing system, “electric-arc furnace - foundry conveyor”, and to consider, when taking ...

  16. Central Auditory Processing Disorders: Is It a Meaningful Construct or a Twentieth Century Unicorn?

    Science.gov (United States)

    Kamhi, Alan G.; Beasley, Daniel S.

    1985-01-01

    The article demonstrates how professional and theoretical perspectives (including psycholinguistic, behaviorist, and information-processing perspectives) significantly influence the manner in which central auditory processing is viewed, assessed, and remediated. (Author/CL)

  17. Mechanical strainer unit

    International Nuclear Information System (INIS)

    Kraeling, J.B.; Netkowicz, R.J.; Schnall, I.H.

    1983-01-01

    The mechanical strainer unit is connected to a flanged conduit which originates in and extends out of a suppression chamber in a nuclear reactor. The strainer includes a plurality of centrally apertured plates positioned along a common central axis and in parallel and spaced relationship. The plates have a plurality of bores radially spaced about the central axis. Spacer means such as washers are positioned between adjacent plates to maintain the plates in spaced relationship and form communicating passages of a predetermined size to the central apertures. Connecting means such as bolts or studs extend through the aligned bores to maintain the unit in assembled relationship and secure the unit to the pipe. By employing perforated plates and blocking off certain of the communicating passages, a dual straining effect can be achieved.

  18. Sedimentary processes in the Carnot Formation (Central African Republic) related to the palaeogeographic framework of Central Africa

    Science.gov (United States)

    Censier, Claude; Lang, Jacques

    1999-08-01

    The depositional environment, provenance and processes of emplacement of the detrital material of the Mesozoic Carnot Formation are defined, by bedding and sedimentological analysis of its main facies, and are reconstructed within the palaeogeographic framework of Central Africa. The clastic material was laid down between probably the Albian and the end of the Cretaceous, in a NNW-oriented braided stream fluvial system that drained into the Doba Trough (Chad) and probably also into the Touboro Basin (Cameroon). The material was derived from weathering of the underlying Devonian-Carboniferous Mambéré Glacial Formation and of the Precambrian schist-quartzite complex located to the south of the Carnot Formation. These results provide useful indications as to the provenance of diamonds mined in the southwest Central African Republic.

  19. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future.

  20. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.