WorldWideScience

Sample records for central processing units computers

  1. Temperature of the Central Processing Unit

    Directory of Open Access Journals (Sweden)

    Ivan Lavrov

    2016-10-01

    Heat is inevitably generated in semiconductors during operation. Cooling of a computer, and of its main part, the Central Processing Unit (CPU), is crucial to allow proper functioning without overheating, malfunction, and damage. In order to estimate the temperature as a function of time, it is important to solve the differential equations describing the heat flow and to understand how it depends on the physical properties of the system. This project aims to answer these questions by considering a simplified model of the CPU + heat sink. An analogy with an electrical circuit and certain methods from electrical circuit analysis are discussed.
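
    As an illustrative aside (not part of the record above), the following Python sketch sets up the kind of lumped two-node thermal model the abstract alludes to, with the CPU die and the heat sink as capacitances and the thermal resistances playing the role of resistors in the electrical analogy; all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

# Lumped two-node thermal model of a CPU die coupled to a heat sink.
# All parameter values below are illustrative assumptions, not taken from the paper.
P      = 65.0    # CPU power dissipation [W]
C_cpu  = 15.0    # heat capacity of the die + spreader [J/K]
C_sink = 350.0   # heat capacity of the heat sink [J/K]
R_cs   = 0.25    # thermal resistance die -> sink [K/W] (analogue of an electrical resistor)
R_sa   = 0.60    # thermal resistance sink -> ambient [K/W]
T_amb  = 25.0    # ambient temperature [deg C]

dt, t_end = 0.01, 600.0                     # time step and simulated duration [s]
n = int(t_end / dt)
T_cpu, T_sink = T_amb, T_amb                # start in thermal equilibrium with ambient

history = np.empty((n, 2))
for i in range(n):
    # "Currents" in the electrical analogy: heat flows through the thermal resistances.
    q_cs = (T_cpu - T_sink) / R_cs
    q_sa = (T_sink - T_amb) / R_sa
    # Explicit Euler update of the two node temperatures.
    T_cpu  += dt * (P - q_cs) / C_cpu
    T_sink += dt * (q_cs - q_sa) / C_sink
    history[i] = (T_cpu, T_sink)

print("steady-state estimate: CPU %.1f C, sink %.1f C" % (history[-1, 0], history[-1, 1]))
```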

  2. Quantum Central Processing Unit and Quantum Algorithm

    Institute of Scientific and Technical Information of China (English)

    王安民

    2002-01-01

    Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including the quantum Fourier transformation, Shor's algorithm and Grover's algorithm, is obtained in a unified way.

  3. Central Computer IMS Processing System (CIMS).

    Science.gov (United States)

    Wolfe, Howard

    As part of the IMS Version 3 tryout in 1971-72, software was developed to enable data submitted by IMS users to be transmitted to the central computer, which acted on the data to create IMS reports and to update the Pupil Data Base with criterion exercise and class roster information. The program logic is described, and the subroutines and…

  4. Mesh-particle interpolations on graphics processing units and multicore central processing units.

    Science.gov (United States)

    Rossinelli, Diego; Conti, Christian; Koumoutsakos, Petros

    2011-06-13

    Particle-mesh interpolations are fundamental operations for particle-in-cell codes, as implemented in vortex methods, plasma dynamics and electrostatics simulations. In these simulations, the mesh is used to solve the field equations and the gradients of the fields are used in order to advance the particles. The time integration of particle trajectories is performed through an extensive resampling of the flow field at the particle locations. The computational performance of this resampling turns out to be limited by the memory bandwidth of the underlying computer architecture. We investigate how mesh-particle interpolation can be efficiently performed on graphics processing units (GPUs) and multicore central processing units (CPUs), and we present two implementation techniques. The single-precision results for the multicore CPU implementation show an acceleration of 45-70×, depending on system size, and an acceleration of 85-155× for the GPU implementation over an efficient single-threaded C++ implementation. In double precision, we observe a performance improvement of 30-40× for the multicore CPU implementation and 20-45× for the GPU implementation. With respect to the 16-threaded standard C++ implementation, the present CPU technique leads to a performance increase of roughly 2.8-3.7× in single precision and 1.7-2.4× in double precision, whereas the GPU technique leads to an improvement of 9× in single precision and 2.2-2.8× in double precision.
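
    As an editorial illustration of the mesh-to-particle resampling step described above (not code from the paper), the sketch below gathers a 1D mesh field onto particle positions with a linear hat kernel in NumPy; production vortex codes typically use higher-order kernels such as M4', so the kernel choice and sizes here are assumptions.

```python
import numpy as np

# Mesh-to-particle (M2P) gather with a linear "hat" kernel on a uniform 1D mesh.
# Simplified CPU sketch of the resampling step; higher-order kernels are used in practice.
def m2p_linear(field, h, xp):
    """Interpolate mesh values `field` (spacing h, node i at x = i*h) to particle positions xp."""
    idx = np.floor(xp / h).astype(int)          # left mesh node of each particle
    idx = np.clip(idx, 0, field.size - 2)       # keep the two-point stencil inside the mesh
    w   = xp / h - idx                          # fractional distance to the left node
    return (1.0 - w) * field[idx] + w * field[idx + 1]

h = 0.1
x_mesh = np.arange(0.0, 1.0 + h, h)
field  = np.sin(2 * np.pi * x_mesh)             # some field solved on the mesh
xp     = np.random.uniform(0.0, 1.0, 10000)     # particle locations
up     = m2p_linear(field, h, xp)               # field resampled at the particles
```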

  5. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
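
    The all-pairs distance computation used as the running example above can be written in a few lines of NumPy; this CPU sketch mirrors the structure a CUDA kernel would parallelize (one independent output element per thread) and is an illustration rather than the article's code.

```python
import numpy as np

# All-pairs Euclidean distances between instances in a dataset.
# Each (i, j) entry is independent, which is why the computation maps well onto GPU threads.
def all_pairs_distance(X):
    """X: (n_instances, n_features) -> (n_instances, n_instances) distance matrix."""
    sq = np.sum(X * X, axis=1)                           # squared norms of each row
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)     # ||xi||^2 + ||xj||^2 - 2 xi.xj
    return np.sqrt(np.maximum(d2, 0.0))                  # clamp tiny negatives from round-off

X = np.random.rand(1000, 32)                             # illustrative dataset
D = all_pairs_distance(X)
```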

  6. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).

  7. Centralization of Intensive Care Units: Process Reengineering in a Hospital

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    2010-03-01

    Centralization of intensive care units (ICUs) is a concept that has been around for several decades, and the OECD countries have led the way in adopting it in their operations. Singapore Hospital was built in 1981, before the concept of centralization of ICUs took off. The hospital's ICUs were never centralized and were spread out across eight different blocks according to the specialization they were associated with. Aware of the new concept of centralization and its benefits, the hospital recognizes the importance of having a centralized ICU to better handle major disasters. Using simulation models, this paper attempts to study the feasibility of centralization of ICUs in Singapore Hospital, subject to space constraints. The results will prove helpful to those who consider reengineering the intensive care process in hospitals.

  8. A Universal Quantum Network Quantum Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    WANG An-Min

    2001-01-01

    A new construction scheme for a universal quantum network which is compatible with the known quantum gate-assembly schemes is proposed. Our quantum network is standard, easy to assemble, reusable, scalable and even potentially programmable. Moreover, we can construct a whole quantum network to implement the general quantum algorithm and quantum simulation procedure. In the above senses, it is a realization of the quantum central processing unit.

  9. Architectural and performance considerations for a 10(7)-instruction/sec optoelectronic central processing unit.

    Science.gov (United States)

    Arrathoon, R; Kozaitis, S

    1987-11-01

    Architectural considerations for a multiple-instruction, single-data-based optoelectronic central processing unit operating at 10(7) instructions per second are detailed. Central to the operation of this device is a giant fiber-optic content-addressable memory in a programmable logic array configuration. The design includes four instructions and emphasizes the fan-in and fan-out capabilities of optical systems. Interconnection limitations and scaling issues are examined.

  10. Process as Content in Computer Science Education: Empirical Determination of Central Processes

    Science.gov (United States)

    Zendler, A.; Spannagel, C.; Klaudt, D.

    2008-01-01

    Computer science education should not be based on short-term developments but on content that is observable in multiple domains of computer science, may be taught at every intellectual level, will be relevant in the longer term, and is related to everyday language and/or thinking. Recently, a catalogue of "central concepts" for computer science…

  11. Density functional theory calculation on many-cores hybrid central processing unit-graphic processing unit architectures.

    Science.gov (United States)

    Genovese, Luigi; Ospici, Matthieu; Deutsch, Thierry; Méhaut, Jean-François; Neelov, Alexey; Goedecker, Stefan

    2009-07-21

    We present the implementation of a full electronic structure calculation code on a hybrid parallel architecture with graphic processing units (GPUs). This implementation is performed on a free software code based on Daubechies wavelets. Such code shows very good performances, systematic convergence properties, and an excellent efficiency on parallel computers. Our GPU-based acceleration fully preserves all these properties. In particular, the code is able to run on many cores which may or may not have a GPU associated, and thus on parallel and massive parallel hybrid machines. With double precision calculations, we may achieve considerable speedup, between a factor of 20 for some operations and a factor of 6 for the whole density functional theory code.

  12. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
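
    The two offloaded stages, a spatial-filter matrix multiply and a per-channel autoregressive power spectrum, can be sketched on the CPU as follows. The channel count, filter, AR order and Yule-Walker estimator are illustrative assumptions standing in for whatever the BCI system actually uses.

```python
import numpy as np

# Minimal CPU sketch of the two steps the study moved to the GPU: a spatial filter
# (matrix-matrix multiply) followed by an autoregressive (Yule-Walker) power spectrum.
def ar_psd(x, order=16, nfreq=128):
    """Yule-Walker AR fit of a 1-D signal, returning its power spectral density."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size   # autocorrelation r[0..]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, -r[1:order + 1])                      # AR coefficients
    sigma2 = r[0] + np.dot(a, r[1:order + 1])                    # driving-noise variance
    freqs = np.linspace(0.0, 0.5, nfreq)                         # normalized frequency
    k = np.arange(1, order + 1)
    denom = np.abs(1.0 + np.exp(-2j * np.pi * freqs[:, None] * k) @ a) ** 2
    return freqs, sigma2 / denom

channels, samples = 64, 250                     # e.g. 64 electrodes, 250 samples (assumed)
raw = np.random.randn(channels, samples)
W = np.eye(channels) - 1.0 / channels           # common-average-reference spatial filter
filtered = W @ raw                              # step 1: matrix-matrix multiply
spectra = np.array([ar_psd(ch)[1] for ch in filtered])  # step 2: AR PSD on every channel
```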

  13. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performances in double and single precision arithmetic of a hybrid GPU/central processing unit (CPU) implementation and a full GPU implementation of the SP2 algorithm exceed those of a CPU-only implementation of the SP2 algorithm and of traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
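
    The SP2 recursion itself is compact; the dense NumPy sketch below shows the trace-guided choice between the two purification branches, with the X @ X products being the generalized matrix-matrix multiplications that the paper maps onto cuBLAS GEMM calls. The random symmetric Hamiltonian and tolerances are assumptions for illustration.

```python
import numpy as np

# Dense-NumPy sketch of the second-order spectral projection (SP2) recursion.
# The Hamiltonian below is a random symmetric stand-in, not a physical model.
def sp2_density_matrix(H, nocc, tol=1e-6, max_iter=100):
    # Gershgorin estimates of the spectral bounds of H.
    radii = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
    e_min = np.min(np.diag(H) - radii)
    e_max = np.max(np.diag(H) + radii)
    # Map H so that its spectrum lies in [0, 1] with occupied states near 1.
    X = (e_max * np.eye(H.shape[0]) - H) / (e_max - e_min)
    for _ in range(max_iter):
        X2 = X @ X                                     # the generalized matrix-matrix multiply
        # Trace(X) - Trace(X^2) = sum lambda(1 - lambda) >= 0 measures idempotency error.
        if np.trace(X) - np.trace(X2) < tol:
            break
        trX2 = np.trace(X2)
        trALT = 2.0 * np.trace(X) - trX2
        # Pick the purification branch that moves the trace closer to the occupation number.
        X = X2 if abs(trX2 - nocc) < abs(trALT - nocc) else 2.0 * X - X2
    return X                                           # ~idempotent density matrix, trace ~ nocc

n, nocc = 200, 80
A = np.random.randn(n, n)
H = 0.5 * (A + A.T)                                    # symmetric test Hamiltonian
P = sp2_density_matrix(H, nocc)
```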

  14. Thermal/Heat Transfer Analysis Using a Graphic Processing Unit (GPU) Enabled Computing Environment Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project was to use GPU enabled computing to accelerate the analyses of heat transfer and thermal effects. Graphical processing unit (GPU)...

  15. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) has emerged as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  16. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    Science.gov (United States)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA) programming model, an overall improvement of 26.33x was achieved when combining both approaches compared with sequential algorithms. Furthermore we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
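
    As an illustration of the kind of update SIRT performs (written here with the common row/column-sum weighting, sometimes labelled SART; the paper's exact weights and adaptive relaxation are not reproduced), a minimal NumPy sketch with a random stand-in for the MIMO projection matrix:

```python
import numpy as np

# Simultaneous iterative reconstruction sketch: every grid point (pixel) is updated
# from all residuals at once, which is what makes the method map well onto GPU threads.
def sirt(A, b, n_iter=200, relax=1.0):
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
    R = 1.0 / row_sums                     # inverse row sums
    C = 1.0 / col_sums                     # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x
        x = x + relax * C * (A.T @ (R * residual))   # simultaneous update of all pixels
    return x

m, n = 400, 256                            # measurements x image pixels (illustrative sizes)
A = np.abs(np.random.rand(m, n))           # stand-in for the tomographic projection operator
x_true = np.random.rand(n)
b = A @ x_true
x_rec = sirt(A, b)
```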

  17. All-optical quantum computing with a hybrid solid-state processing unit

    CERN Document Server

    Pei, Pei; Li, Chong

    2011-01-01

    We develop an architecture of a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our method has the prominent advantage of insensitivity to dissipation processes due to the virtual excitation of subsystems. Moreover, QND measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in a broader sense, in that different solid-state systems can merge and be integrated into one quantum processor.

  18. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance

    Science.gov (United States)

    Mano, Omer

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system’s response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data. PMID:28068420
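
    A minimal CPU sketch of a spike-triggered covariance computation of the kind the GPU module accelerates is given below; the stimulus, spike train and window length are synthetic assumptions, and the real module's GPU kernels are not reproduced.

```python
import numpy as np

# Spike-triggered average (STA) and spike-triggered covariance (STC) from a 1-D stimulus.
def spike_triggered_covariance(stimulus, spikes, window):
    """stimulus: 1-D array; spikes: 0/1 array of same length; window: # samples before a spike."""
    idx = np.where(spikes > 0)[0]
    idx = idx[idx >= window]                                   # keep spikes with a full window
    ensemble = np.stack([stimulus[i - window:i] for i in idx]) # spike-triggered stimulus segments
    sta = ensemble.mean(axis=0)                                # spike-triggered average
    centered = ensemble - sta
    stc = centered.T @ centered / (len(idx) - 1)               # covariance of the ensemble
    # Subtracting the prior stimulus covariance isolates spike-associated structure.
    prior = np.cov(np.stack([stimulus[i - window:i]
                             for i in range(window, len(stimulus))]).T)
    return sta, stc - prior

T, window = 20000, 30
stim = np.random.randn(T)                                      # synthetic white-noise stimulus
spikes = (np.random.rand(T) < 0.02).astype(int)                # synthetic spike train
sta, stc = spike_triggered_covariance(stim, spikes, window)
```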

  19. Monte Carlo standardless approach for laser induced breakdown spectroscopy based on massive parallel graphic processing unit computing

    Science.gov (United States)

    Demidov, A.; Eschlböck-Fuchs, S.; Kazakov, A. Ya.; Gornushkin, I. B.; Kolmhofer, P. J.; Pedarnig, J. D.; Huber, N.; Heitz, J.; Schmid, T.; Rössler, R.; Panne, U.

    2016-11-01

    The improved Monte-Carlo (MC) method for standardless analysis in laser induced breakdown spectroscopy (LIBS) is presented. Concentrations in MC LIBS are found by fitting model-generated synthetic spectra to experimental spectra. The current version of MC LIBS is based on graphic processing unit (GPU) computation and reduces the analysis time to several seconds per spectrum/sample. The previous version of MC LIBS, which was based on central processing unit (CPU) computation, required unacceptably long analysis times of tens of minutes per spectrum/sample. The reduction of the computational time is achieved through massively parallel computing on the GPU, which embeds thousands of co-processors. It is shown that the number of iterations on the GPU exceeds that on the CPU by a factor > 1000 for the 5-dimensional parameter space and yet requires a > 10-fold shorter computational time. The improved GPU-MC LIBS outperforms the CPU-MC LIBS in terms of accuracy, precision, and analysis time. The performance is tested on LIBS spectra obtained from pelletized powders of metal oxides consisting of CaO, Fe2O3, MgO, and TiO2 that simulate by-products of the steel industry, steel slags. It is demonstrated that GPU-based MC LIBS is capable of rapid multi-element analysis with relative errors between 1 and a few tens of percent, which is sufficient for industrial applications (e.g. steel slag analysis). The results of the improved GPU-based MC LIBS compare favorably to those of the CPU-based MC LIBS as well as to the results of the standard calibration-free (CF) LIBS based on the Boltzmann plot method.

  20. Real-Time Computation of Parameter Fitting and Image Reconstruction Using Graphical Processing Units

    CERN Document Server

    Locans, Uldis; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Gunther; Wang, Qiulin

    2016-01-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of muSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the ...

  1. Optical diagnostics of a single evaporating droplet using fast parallel computing on graphics processing units

    Science.gov (United States)

    Jakubczyk, D.; Migacz, S.; Derkachov, G.; Woźniak, M.; Archer, J.; Kolwas, K.

    2016-09-01

    We report on the first application of graphics processing unit (GPU) accelerated computing technology to improve the performance of numerical methods used for the optical characterization of evaporating microdroplets. Single microdroplets of various liquids with different volatility and molecular weight (glycerine, glycols, water, etc.), as well as mixtures of liquids and diverse suspensions, evaporate inside the electrodynamic trap under a chosen temperature and atmosphere composition. The series of scattering patterns recorded from the evaporating microdroplets are processed by fitting complete Mie theory predictions with a gradientless lookup-table method. We showed that computations on GPUs can be effectively applied to inverse scattering problems. In particular, our technique accelerated the Mie scattering calculations more than 800 times compared to a single-core processor in a Matlab environment and almost 100 times compared to the corresponding code in C. Additionally, we overcame problems of time-consuming data post-processing when some of the parameters (particularly the refractive index) of an investigated liquid are uncertain. Our program allows us to track the parameters characterizing the evaporating droplet nearly simultaneously with the progress of evaporation.

  2. Future forest aboveground carbon dynamics in the central United States: the importance of forest demographic processes

    Science.gov (United States)

    Jin, Wenchi; He, Hong S.; Thompson, Frank R.; Wang, Wen J.; Fraser, Jacob S.; Shifley, Stephen R.; Hanberry, Brice B.; Dijak, William D.

    2017-01-01

    The Central Hardwood Forest (CHF) in the United States is currently a major carbon sink, but there are uncertainties about how long the current carbon sink will persist and whether the CHF will eventually become a carbon source. We used a multi-model ensemble to investigate aboveground carbon density of the CHF from 2010 to 2300 under the current climate. Simulations were done using one representative model for each of the simple, intermediate, and complex demographic approaches (ED2, LANDIS PRO, and LINKAGES, respectively). All approaches agreed that the current carbon sink would persist at least to 2100. However, carbon dynamics after the current carbon sink diminishes to zero differ among the demographic modelling approaches. Both the simple and the complex demographic approaches predicted prolonged periods of relatively stable carbon densities after 2100, with minor declines, until the end of the simulations in 2300. In contrast, the intermediate demographic approach predicted the CHF would become a carbon source between 2110 and 2260, followed by another carbon sink period. The disagreement between these patterns can be partly explained by differences in the capacity of the models to simulate gross growth (both birth and subsequent growth) and mortality of short-lived, relatively shade-intolerant tree species. PMID:28165483

  3. Exploring Graphics Processing Unit (GPU) Resource Sharing Efficiency for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Teng Li

    2013-11-01

    The increasing incorporation of Graphics Processing Units (GPUs) as accelerators has been one of the forefront High Performance Computing (HPC) trends and provides unprecedented performance; however, the prevalent adoption of the Single-Program Multiple-Data (SPMD) programming model brings with it challenges of resource underutilization. In other words, under SPMD, every CPU needs GPU capability available to it. However, since CPUs generally outnumber GPUs, the asymmetric resource distribution gives rise to overall computing resource underutilization. In this paper, we propose to efficiently share the GPU under SPMD and formally define a series of GPU sharing scenarios. We provide performance-modeling analysis for each sharing scenario with accurate experimentation validation. With the modeling basis, we further conduct experimental studies to explore potential GPU sharing efficiency improvements from multiple perspectives. Both further theoretical and experimental GPU sharing performance analysis and results are presented. Our results not only demonstrate the significant performance gain for SPMD programs with the proposed efficient GPU sharing, but also the further improved sharing efficiency with the optimization techniques based on our accurate modeling.

  4. Fast computation of MadGraph amplitudes on graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Li, Q; Okamura, N; Stelzer, T

    2013-01-01

    Continuing our previous studies on QED and QCD processes, we use the graphics processing unit (GPU) for fast calculations of helicity amplitudes for general Standard Model (SM) processes. Additional HEGET codes to handle all SM interactions are introduced, as well as the program MG2CUDA that converts arbitrary MadGraph-generated HELAS amplitudes (FORTRAN) into HEGET codes in CUDA. We test all the codes by comparing amplitudes and cross sections for multi-jet processes at the LHC associated with production of single and double weak bosons, a top-quark pair, a Higgs boson plus a weak boson or a top-quark pair, and multiple Higgs bosons via weak-boson fusion, where all the heavy particles are allowed to decay into light quarks and leptons with full spin correlations. All the helicity amplitudes computed by HEGET are found to agree with those computed by HELAS within the expected numerical accuracy, and the cross sections obtained by gBASES, a GPU version of the Monte Carlo integration program, agree with those obt...

  5. Accelerating the Fourier split operator method via graphics processing units

    CERN Document Server

    Bauke, Heiko

    2010-01-01

    Current generations of graphics processing units have turned into highly parallel devices with general computing capabilities. Thus, graphics processing units may be utilized, for example, to solve time dependent partial differential equations by the Fourier split operator method. In this contribution, we demonstrate that graphics processing units are capable of calculating fast Fourier transforms much more efficiently than traditional central processing units. Thus, graphics processing units render efficient implementations of the Fourier split operator method possible. Performance gains of more than an order of magnitude as compared to implementations for traditional central processing units are reached in the solution of the time dependent Schrödinger equation and the time dependent Dirac equation.
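
    A minimal NumPy sketch of the split-operator (Strang) step for the 1D time-dependent Schrödinger equation is shown below; np.fft stands in for the GPU FFT library, and the harmonic potential, grid and time step are assumed example values.

```python
import numpy as np

# One-dimensional split-operator propagation of the time-dependent Schrodinger equation.
# Units with hbar = m = 1; each FFT/IFFT pair is the kernel that benefits from the GPU.
nx, L, dt = 1024, 40.0, 0.002
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)        # momentum grid matching the FFT layout
V = 0.5 * x**2                                        # harmonic potential (assumed example)
psi = np.exp(-((x - 2.0) ** 2)) * (1.0 + 0j)          # displaced Gaussian initial state
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / nx))   # normalize

half_V = np.exp(-0.5j * dt * V)                       # half-step in the potential
full_T = np.exp(-0.5j * dt * k**2)                    # full kinetic step in momentum space

for _ in range(5000):
    psi = half_V * psi
    psi = np.fft.ifft(full_T * np.fft.fft(psi))       # kinetic step applied in Fourier space
    psi = half_V * psi
```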

  6. STRATEGIC BUSINESS UNIT – THE CENTRAL ELEMENT OF THE BUSINESS PORTFOLIO STRATEGIC PLANNING PROCESS

    OpenAIRE

    FLORIN TUDOR IONESCU

    2011-01-01

    Over time, due to changes in the marketing environment generated by tightening competition and by technological, social and political pressures, companies have adopted a new approach in which potential businesses began to be treated as strategic business units. A strategic business unit can be considered a part of a company, a product line within a division, and sometimes a single product or brand. From a strategic perspective, the diversified companies represent a collection of busine...

  7. An Investigation Into the Feasibility of Merging Three Technical Processing Operations Into One Central Unit.

    Science.gov (United States)

    Burns, Robert W., Jr.

    Three contiguous schools in the upper midwest--a teacher's training college and a private four-year college in one state, and a land-grant university in another--were studied to see if their libraries could merge one of their major divisions--technical services--into a single administrative unit. Potential benefits from such a merger were felt to…

  8. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  9. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    Directory of Open Access Journals (Sweden)

    Kui Liu

    2017-02-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  10. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    Science.gov (United States)

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  11. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    Science.gov (United States)

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  12. Identification of a site critical for kinase regulation on the central processing unit (CPU) helix of the aspartate receptor.

    Science.gov (United States)

    Trammell, M A; Falke, J J

    1999-01-01

    Ligand binding to the homodimeric aspartate receptor of Escherichia coli and Salmonella typhimurium generates a transmembrane signal that regulates the activity of a cytoplasmic histidine kinase, thereby controlling cellular chemotaxis. This receptor also senses intracellular pH and ambient temperature and is covalently modified by an adaptation system. A specific helix in the cytoplasmic domain of the receptor, helix alpha6, has been previously implicated in the processing of these multiple input signals. While the solvent-exposed face of helix alpha6 possesses adaptive methylation sites known to play a role in kinase regulation, the functional significance of its buried face is less clear. This buried region lies at the subunit interface where helix alpha6 packs against its symmetric partner, helix alpha6'. To test the role of the helix alpha6-helix alpha6' interface in kinase regulation, the present study introduces a series of 13 side-chain substitutions at the Gly 278 position on the buried face of helix alpha6. The substitutions are observed to dramatically alter receptor function in vivo and in vitro, yielding effects ranging from kinase superactivation (11 examples) to complete kinase inhibition (one example). Moreover, four hydrophobic, branched side chains (Val, Ile, Phe, and Trp) lock the kinase in the superactivated state regardless of whether the receptor is occupied by ligand. The observation that most side-chain substitutions at position 278 yield kinase superactivation, combined with evidence that such facile superactivation is rare at other receptor positions, identifies the buried Gly 278 residue as a regulatory hotspot where helix packing is tightly coupled to kinase regulation. Together, helix alpha6 and its packing interactions function as a simple central processing unit (CPU) that senses multiple input signals, integrates these signals, and transmits the output to the signaling subdomain where the histidine kinase is bound. Analogous CPU

  13. Central nervous system and computation.

    Science.gov (United States)

    Guidolin, Diego; Albertin, Giovanna; Guescini, Michele; Fuxe, Kjell; Agnati, Luigi F

    2011-12-01

    Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

  14. Reduction of computing time for seismic applications based on the Helmholtz equation by Graphics Processing Units

    NARCIS (Netherlands)

    Knibbe, H.P.

    2015-01-01

    The oil and gas industry makes use of computational intensive algorithms to provide an image of the subsurface. The image is obtained by sending wave energy into the subsurface and recording the signal required for a seismic wave to reflect back to the surface from the Earth interfaces that may have

  15. Fast traffic noise mapping of cities using the graphics processing unit of a personal computer

    NARCIS (Netherlands)

    Salomons, E.M.; Zhou, H.; Lohman, W.J.A.

    2014-01-01

    Traffic noise mapping of cities requires large computer calculation times. This originates from the large number of point-to-point sound propagation calculations that must be performed. In this article it is demonstrated that noise mapping calculation times can be reduced considerably by the use of

  16. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    Science.gov (United States)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease caused by damage to the myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but the process is highly variable and time-consuming since lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer Aided Diagnosis) system to assist segmentation of MS and facilitate the physician's diagnosis. The MS CAD system utilizes the K-NN (k-nearest neighbor) algorithm to detect and segment the lesion volume on a voxel basis. The prototype MS CAD system was developed in the MATLAB environment. Currently, the MS CAD system consumes a large amount of time to process data. In this paper we present the development of a second version of the MS CAD system which has been converted into C/C++ in order to take advantage of the GPU (Graphical Processing Unit), which provides parallel computation. With the realization in C/C++ and utilization of the GPU, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computing of data to improve the algorithm performance of the MS CAD.
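
    As a toy illustration of voxel-wise K-NN classification in the spirit of the CAD system described above (synthetic features and labels, not the system's actual feature set or GPU code):

```python
import numpy as np

# Toy k-NN voxel classifier: each voxel is classified from per-voxel features against a
# labeled training set, producing a binary lesion mask. All data here are synthetic.
def knn_classify(train_feats, train_labels, test_feats, k=5):
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]                   # indices of k nearest training voxels
    votes = train_labels[nearest]                             # their labels
    return (votes.mean(axis=1) >= 0.5).astype(int)            # majority vote (binary lesion mask)

train_feats = np.random.rand(2000, 3)                         # 3 assumed features per training voxel
train_labels = (train_feats[:, 0] > 0.7).astype(int)          # synthetic "lesion" labeling rule
test_feats = np.random.rand(500, 3)
lesion_mask = knn_classify(train_feats, train_labels, test_feats)
```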

  17. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses the summed absolute difference error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation, we compared the execution times of GPU and CPU implementations for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the simplified unsymmetrical multi-hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
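
    The core of the evaluated method, full-search block matching with the summed-absolute-difference criterion, can be sketched for a single block as follows; frame sizes and the synthetic motion are assumptions, and the GPU version evaluates all candidate displacements and blocks in parallel.

```python
import numpy as np

# Full-search (FS) block matching with the summed-absolute-difference (SAD) criterion,
# written for a single block on the CPU as an illustration.
def fs_block_match(ref, cur, top, left, block=16, search=8):
    """Return the integer displacement (dy, dx) minimizing SAD for one block."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                                  # candidate block falls outside the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()             # summed absolute difference
            if best is None or sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

ref = np.random.randint(0, 256, (480, 720), dtype=np.uint8)   # 720x480 frame as in the paper
cur = np.roll(ref, (2, -3), axis=(0, 1))                      # synthetic motion
print(fs_block_match(ref, cur, top=64, left=128))             # expect roughly (-2, 3)
```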

  18. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    Science.gov (United States)

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar with that of CPU + GPU. © 2016 Wiley Periodicals, Inc.

  19. GPUDePiCt: A Parallel Implementation of a Clustering Algorithm for Computing Degenerate Primers on Graphics Processing Units.

    Science.gov (United States)

    Cickovski, Trevor; Flor, Tiffany; Irving-Sachs, Galen; Novikov, Philip; Parda, James; Narasimhan, Giri

    2015-01-01

    In order to make multiple copies of a target sequence in the laboratory, the technique of Polymerase Chain Reaction (PCR) requires the design of "primers", which are short fragments of nucleotides complementary to the flanking regions of the target sequence. If the same primer is to amplify multiple closely related target sequences, then it is necessary to make the primers "degenerate", which allows them to hybridize to target sequences with a limited amount of variability that may have been caused by mutations. However, the PCR technique can only tolerate a limited amount of degeneracy, and therefore the design of degenerate primers requires the identification of reasonably well-conserved regions in the input sequences. We take an existing algorithm for designing degenerate primers that is based on clustering and parallelize it in a web-accessible software package GPUDePiCt, using a shared memory model and the computing power of Graphics Processing Units (GPUs). We test our implementation on large sets of aligned sequences from the human genome and show a multi-fold speedup for clustering using our hybrid GPU/CPU implementation over a pure CPU approach for these sequences, which consist of more than 7,500 nucleotides. We also demonstrate that this speedup is consistent over larger numbers and longer lengths of aligned sequences.

  20. The Vacuum Pyrolysis of Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    王晓雅

    2012-01-01

    The low temperature pyrolysis of an important electronic waste, the central processing unit (CPU), was investigated under vacuum conditions and compared with the results of higher temperature pyrolysis. Results showed that at pyrolysis temperatures of 500-700 °C the CPU base boards decomposed thoroughly with a high pyrolysis-oil yield, which is favorable for recovering the organics in the CPU, and the pins could be separated completely from the base plates. When the pyrolysis was carried out at 300-400 °C, the solder mask of the CPU was pyrolysed and the pins could be separated from the base plates with a relatively intact gold-plated layer. Meanwhile, the pyrolysis-oil yield was lower, but the composition of the pyrolysis oil was relatively simple, which makes it easy to separate and purify.

  1. Study on efficiency of time computation in x-ray imaging simulation based on Monte Carlo algorithm using graphics processing unit

    Science.gov (United States)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter than on the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with photon histories starting from 10(8) and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  2. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  3. Signal processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Boswell, J.

    1983-01-01

    The architecture of the signal processing unit (SPU) comprises a ROM connected to a program bus, and an input-output bus connected to a data bus and register through a pipeline multiplier accumulator (pmac) and a pipeline arithmetic logic unit (palu), each associated with a random access memory (ram1, ram2). The system clock frequency is 20 MHz. The pmac is further detailed, and has a capability of 20 mega operations per second. There is also a block diagram for the palu, showing interconnections between the register block (rbl), separator for bus (bs), register (reg), shifter (sh) and combination unit. The first and second RAMs have formats of 64*16 and 32*32 bits, respectively. Further data are a 5-V power supply and 2.5-micron n-channel silicon-gate MOS technology with about 50000 transistors.

  4. Relativistic hydrodynamics on graphics processing units

    CERN Document Server

    Sikorski, Jan; Porter-Sobieraj, Joanna; Słodkowski, Marcin; Krzyżanowski, Piotr; Książek, Natalia; Duda, Przemysław

    2016-01-01

    Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by the lack of an efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers an unprecedented increase in computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using the Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. A third order Runge-Kutta scheme was used for integration in the t...

  5. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    Science.gov (United States)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.

  6. Real-space density functional theory on graphical processing units: computational approach and comparison to Gaussian basis set methods

    CERN Document Server

    Andrade, Xavier

    2013-01-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code OCTOPUS, can reach a sustained performance of up to 90 GFlops for a single GPU, representing an important speed-up when compared to the CPU version of the code. Moreover, for some systems our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  7. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    Science.gov (United States)

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  8. Computing betweenness centrality in external memory

    DEFF Research Database (Denmark)

    Arge, Lars; Goodrich, Michael T.; Walderveen, Freek van

    2013-01-01

    Betweenness centrality is one of the most well-known measures of the importance of nodes in a social-network graph. In this paper we describe the first known external-memory and cache-oblivious algorithms for computing betweenness centrality. We present four different external-memory algorithms...
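
    For reference, betweenness centrality has the standard definition (background, not text from the record): for a node \(v\),

        \[
        C_B(v) \;=\; \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}},
        \]

    where \(\sigma_{st}\) is the number of shortest paths from \(s\) to \(t\) and \(\sigma_{st}(v)\) is the number of those paths passing through \(v\); it is this all-pairs shortest-path dependency that makes the computation memory-intensive on large graphs.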

  9. Distributed Computing with Centralized Support Works at Brigham Young.

    Science.gov (United States)

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  10. Central Limit Theorem for Nonlinear Hawkes Processes

    CERN Document Server

    Zhu, Lingjiong

    2012-01-01

    The Hawkes process is a self-exciting point process with a clustering effect whose jump rate depends on its entire past history. It has wide applications in neuroscience, finance and many other fields. The linear Hawkes process has an immigration-birth representation and can be computed more or less explicitly. It has been extensively studied in the past and its limit theorems are well understood. On the contrary, the nonlinear Hawkes process lacks the immigration-birth representation and is much harder to analyze. In this paper, we obtain a functional central limit theorem for the nonlinear Hawkes process.
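
    In standard notation (background, not quoted from the paper), a nonlinear Hawkes process \(N\) with event times \(\tau_i\) has conditional intensity

        \[
        \lambda_t \;=\; \phi\!\Bigl(\sum_{\tau_i < t} h(t-\tau_i)\Bigr),
        \]

    with the linear case recovered for \(\phi(x)=\nu+x\). A functional central limit theorem for such a process is a statement of the form

        \[
        \frac{N_{t\,\cdot} - \mu\, t\,\cdot}{\sqrt{t}} \;\Rightarrow\; \sigma B(\cdot) \quad (t\to\infty),
        \]

    where \(B\) is a standard Brownian motion and \(\mu\), \(\sigma^2\) are limiting mean and variance constants determined by \(\phi\) and \(h\).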

  11. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    Science.gov (United States)

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used to calculate computer-generated holograms. This paper proposes a novel fast calculation method for a patch model that builds on the point-based method. The method provides a calculation time that is proportional to the number of patches rather than to the number of point light sources, which makes it suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method.
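
    A minimal CUDA sketch of the baseline point-based computation that the proposed patch-based method accelerates: every hologram pixel accumulates the contribution of every point light source. Names, pixel pitch and wavenumber are assumptions; the patch-level optimisation of the paper is not reproduced here.

        // Point-source computer-generated hologram: one thread per hologram pixel.
        __global__ void pointSourceHologram(const float3 *points, const float *amp,
                                            int nPoints, float *hologram,
                                            int width, int height,
                                            float pitch, float k) {
            int px = blockIdx.x * blockDim.x + threadIdx.x;
            int py = blockIdx.y * blockDim.y + threadIdx.y;
            if (px >= width || py >= height) return;

            float x = (px - width / 2) * pitch;   // hologram-plane coordinates
            float y = (py - height / 2) * pitch;
            float acc = 0.0f;
            for (int j = 0; j < nPoints; ++j) {   // superpose all point sources
                float dx = x - points[j].x, dy = y - points[j].y, dz = points[j].z;
                float r = sqrtf(dx * dx + dy * dy + dz * dz);
                acc += amp[j] * cosf(k * r);      // real part of the spherical wave
            }
            hologram[py * width + px] = acc;
        }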

  12. Reduction of computing time for least-squares migration based on the Helmholtz equation by graphics processing units

    NARCIS (Netherlands)

    Knibbe, H.; Vuik, C.; Oosterlee, C.W.

    2015-01-01

    In geophysical applications, the interest in least-squares migration (LSM) as an imaging algorithm is increasing due to the demand for more accurate solutions and the development of high-performance computing. The computational engine of LSM in this work is the numerical solution of the 3D Helmholtz

  13. Numerical Integration with Graphical Processing Unit for QKD Simulation

    Science.gov (United States)

    2014-03-27

    ...existing and proposed Quantum Key Distribution (QKD) systems. This research investigates using graphical processing unit (GPU) technology to more... [the remainder of this record consists of extraction fragments: glossary entries (One-Time Pad, GPU, API, CUDA, SIMD) and the opening of a section on GPU programming]

  14. The modernization of the process computer of the Trillo Nuclear Power Plant; Modernizacion del ordenador de proceso de la Central Nuclear de Trillo

    Energy Technology Data Exchange (ETDEWEB)

    Martin Aparicio, J.; Atanasio, J.

    2011-07-01

    The paper describes the modernization of the process computer of the Trillo Nuclear Power Plant. The process computer functions have been incorporated into the non-safety I&C platform selected at Trillo NPP: the Siemens SPPA-T2000 OM690 (formerly known as Teleperm XP). The upgrade of the control room Human-Machine Interface has been included in the project. The modernization project followed the same development process used in the upgrade of the process computers of German PWR nuclear power plants. (Author)

  15. Fault Data Processing Technology Applied in the Central Maintenance Computer System

    Institute of Scientific and Technical Information of China (English)

    李文娟; 贺尔铭; 马存宝

    2014-01-01

    Based on knowledge of how failures are generated and processed on board the aircraft, the functions, modeling strategy and processing flow of the fault data processing module of the central maintenance computer system (CMCS) are analyzed. Fault message generation, cascaded-fault screening, consolidation of repeated faults, and the correlation of flight deck effects (FDEs) with maintenance messages are presented in turn. Taking an advanced foreign aircraft type as an example, logic-equation-based fault isolation technology is discussed in depth, so that the system design approach is coupled to the on-board fault diagnosis model. As CMCS technology continues to develop and mature, it will inevitably have a far-reaching influence on the traditional maintenance and operation modes of commercial aircraft.

  16. Sono-leather technology with ultrasound: a boon for unit operations in leather processing - review of our research work at Central Leather Research Institute (CLRI), India.

    Science.gov (United States)

    Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari

    2009-01-01

    Ultrasound is a sound wave with a frequency above the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation. The benefits include improved process efficiency, reduced process time, operation under milder conditions and avoidance of some toxic chemicals, leading to cleaner processing; these advantages make ultrasonic irradiation an attractive, advanced means of process augmentation. The important point here is that ultrasonic irradiation is a physical method of activation rather than the use of chemical entities. Detailed studies have been made of unit operations related to leather, such as diffusion rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, the oil-water emulsification process and solid-liquid tannin extraction from vegetable tanning materials, as well as the precipitation reaction in wastewater treatment. The fundamental mechanism involved in these processes is ultrasonic cavitation in liquid media. In addition, some process-specific mechanisms for enhancement also exist; for instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could be a reason for the diffusion rate enhancement in leather processing, as reported for the first time. Exhaustive scientific research has been carried out in this area by our group in the Chemical Engineering Division of CLRI, and most of these benefits have been proven in publications in peer-reviewed international journals. The overall results indicate about a 2-5-fold increase in process efficiency due to ultrasound under the given process conditions for the various unit operations, with additional benefits. Scale-up studies are underway for converting these concepts into a real, viable larger-scale operation. In

  17. Application of graphics processing units in general-purpose computation

    Institute of Scientific and Technical Information of China (English)

    张健; 陈瑞

    2009-01-01

    Based on the compute unified device architecture (CUDA) for graphics processing units (GPUs), the technical fundamentals and methods for general-purpose computation on the GPU are introduced. A matrix multiplication algorithm is benchmarked on a GeForce 8800 GT. As the matrix order increases, processing slows down on both the CPU and the GPU; however, after the data volume grows by a factor of 100, the computation time increases only 3.95-fold on the GPU, compared with 216.66-fold on the CPU.

  18. Mobility in process calculi and natural computing

    CERN Document Server

    Aman, Bogdan

    2011-01-01

    The design of formal calculi in which fundamental concepts underlying interactive systems can be described and studied has been a central theme of theoretical computer science in recent decades, while membrane computing, a rule-based formalism inspired by biological cells, is a more recent field that belongs to the general area of natural computing. This is the first book to establish a link between these two research directions while treating mobility as the central topic. In the first chapter the authors offer a formal description of mobility in process calculi, noting the entities that move

  19. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Full Text Available Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. The GPU accelerated applications are found in both scientific and commercial domains. Sorting is considered as one of the very important operations in many applications, so its efficient implementation is essential for the overall application performance. This paper represents an effort to analyze and evaluate the implementations of the representative sorting algorithms on the graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. Algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
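
    As a minimal illustration of GPU sorting on the CUDA platform (the paper's Quicksort, Merge sort and Radix sort implementations are not reproduced here), the Thrust library bundled with the CUDA toolkit sorts a device vector in one call, dispatching to a parallel radix sort for primitive key types:

        #include <thrust/copy.h>
        #include <thrust/device_vector.h>
        #include <thrust/host_vector.h>
        #include <thrust/sort.h>
        #include <cstdlib>

        int main() {
            thrust::host_vector<int> h(1 << 20);              // 1M random keys
            for (size_t i = 0; i < h.size(); ++i) h[i] = rand();

            thrust::device_vector<int> d = h;                 // copy keys to the GPU
            thrust::sort(d.begin(), d.end());                 // parallel sort on the device
            thrust::copy(d.begin(), d.end(), h.begin());      // copy sorted keys back
            return 0;
        }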

  20. Citizens unite for computational immunology!

    Science.gov (United States)

    Belden, Orrin S; Baker, Sarah Catherine; Baker, Brian M

    2015-07-01

    Recruiting volunteers who can provide computational time, programming expertise, or puzzle-solving talent has emerged as a powerful tool for biomedical research. Recent projects demonstrate the potential for such 'crowdsourcing' efforts in immunology. Tools for developing applications, new funding opportunities, and an eager public make crowdsourcing a serious option for creative solutions for computationally-challenging problems. Expanded uses of crowdsourcing in immunology will allow for more efficient large-scale data collection and analysis. It will also involve, inspire, educate, and engage the public in a variety of meaningful ways. The benefits are real - it is time to jump in!

  1. Guide to Computational Geometry Processing

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Gravesen, Jens; Anton, François;

    be processed before it is useful. This Guide to Computational Geometry Processing reviews the algorithms for processing geometric data, with a practical focus on important techniques not covered by traditional courses on computer vision and computer graphics. This is balanced with an introduction......, metric space, affine spaces, differential geometry, and finite difference methods for derivatives and differential equations Reviews geometry representations, including polygonal meshes, splines, and subdivision surfaces Examines techniques for computing curvature from polygonal meshes Describes...

  2. Parallel Computers in Signal Processing

    Directory of Open Access Journals (Sweden)

    Narsingh Deo

    1985-07-01

    Full Text Available Signal processing often requires a great deal of raw computing power for which it is important to take a look at parallel computers. The paper reviews various types of parallel computer architectures from the viewpoint of signal and image processing.

  3. 2011 floods of the central United States

    Science.gov (United States)

    ,

    2013-01-01

    The Central United States experienced record-setting flooding during 2011, with floods that extended from headwater streams in the Rocky Mountains, to transboundary rivers in the upper Midwest and Northern Plains, to the deep and wide sand-bedded lower Mississippi River. The U.S. Geological Survey (USGS), as part of its mission, collected extensive information during and in the aftermath of the 2011 floods to support scientific analysis of the origins and consequences of extreme floods. The information collected for the 2011 floods, combined with decades of past data, enables scientists and engineers from the USGS to provide syntheses and scientific analyses to inform emergency managers, planners, and policy makers about life-safety, economic, and environmental-health issues surrounding flood hazards for the 2011 floods and future floods like it. USGS data, information, and scientific analyses provide context and understanding of the effect of floods on complex societal issues such as ecosystem and human health, flood-plain management, climate-change adaptation, economic security, and the associated policies enacted for mitigation. Among the largest societal questions is "How do we balance agricultural, economic, life-safety, and environmental needs in and along our rivers?" To address this issue, many scientific questions have to be answered including the following: * How do the 2011 weather and flood conditions compare to the past weather and flood conditions and what can we reasonably expect in the future for flood magnitudes?

  4. Computer Processed Evaluation.

    Science.gov (United States)

    Griswold, George H.; Kapp, George H.

    A student testing system was developed consisting of computer generated and scored equivalent but unique repeatable tests based on performance objectives for undergraduate chemistry classes. The evaluation part of the computer system, made up of four separate programs written in FORTRAN IV, generates tests containing varying numbers of multiple…

  5. Accelerated space object tracking via graphic processing unit

    Science.gov (United States)

    Jia, Bin; Liu, Kui; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    In this paper, a hybrid Monte Carlo Gauss mixture Kalman filter is proposed for the continuous orbit estimation problem. Specifically, the graphic processing unit (GPU) aided Monte Carlo method is used to propagate the uncertainty of the estimation when the observation is not available and the Gauss mixture Kalman filter is used to update the estimation when the observation sequences are available. A typical space object tracking problem using the ground radar is used to test the performance of the proposed algorithm. The performance of the proposed algorithm is compared with the popular cubature Kalman filter (CKF). The simulation results show that the ordinary CKF diverges in 5 observation periods. In contrast, the proposed hybrid Monte Carlo Gauss mixture Kalman filter achieves satisfactory performance in all observation periods. In addition, by using the GPU, the computational time is over 100 times less than that using the conventional central processing unit (CPU).

  6. THOR Particle Processing Unit PPU

    Science.gov (United States)

    Federica Marcucci, Maria; Bruno, Roberto; Consolini, Giuseppe; D'Amicis, Raffaella; De Lauretis, Marcello; De Marco, Rossana; De Michelis, Paola; Francia, Patrizia; Laurenza, Monica; Materassi, Massimo; Vellante, Massimo; Valentini, Francesco

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. On board THOR, data collected by the Turbulent Electron Analyser, the Ion Mass Spectrum analyser and the Cold Solar Wind ion analyser instruments will be processed by a common digital processor unit, the Particle Processing Unit (PPU). PPU architecture will be based on the state of the art space flight processors and will be fully redundant, in order to efficiently and safely handle the data from the numerous sensors of the instruments suite. The approach of a common processing unit for particle instruments is very important for the enabling of an efficient management for correlative plasma measurements, also facilitating interoperation with other instruments on the spacecraft. Moreover, it permits technical and programmatic synergies giving the possibility to optimize and save spacecraft resources.

  7. Graphics Processing Unit Assisted Thermographic Compositing

    Science.gov (United States)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  8. Information processing, computation, and cognition.

    Science.gov (United States)

    Piccinini, Gualtiero; Scarantino, Andrea

    2011-01-01

    Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both - although others disagree vehemently. Yet different cognitive scientists use 'computation' and 'information processing' to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism, connectionism, and computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates' empirical aspects.

  9. Magnetohydrodynamics simulations on graphics processing units

    CERN Document Server

    Wong, Hon-Cheng; Feng, Xueshang; Tang, Zesheng

    2009-01-01

    Magnetohydrodynamics (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implemented these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the authors' knowledge, the first implementation to accelerate computation of MHD simulations on GPUs. Numerical tests have been performed to validate the correctness of our GPU MHD code. Performance measurements show that our GPU-based implementation achieves speedups of 2 (1D problem with 2048 grids), 106 (2D problem with 1024^2 grids), and 43 (3D problem with 128^3 grids), respec...

  10. Design and Implementation of a Central Processing Unit Module for Airborne Equipment

    Institute of Scientific and Technical Information of China (English)

    王俊; 吕俊; 杨宁

    2014-01-01

    The design and implementation of a central processing unit module for airborne equipment is introduced in this paper. The airborne equipment receives instruction signals from the flight control system via an RS422 link; the central processing unit then performs control, data computation and A/D conversion, and feeds the results back to the actuating mechanism, thereby implementing the expected functions of the airborne equipment. The module has been used in flight, and the results confirm that the design is practical and provides a useful reference.

  11. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
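
    A minimal CUDA sketch of the shared-memory tree reduction that such optimisation work starts from (kernel and variable names are assumptions; the further optimisation steps studied in the paper, such as warp unrolling and processing several elements per thread, are omitted):

        __global__ void reduceSum(const float *in, float *blockSums, int n) {
            extern __shared__ float sdata[];
            unsigned int tid = threadIdx.x;
            unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

            sdata[tid] = (i < n) ? in[i] : 0.0f;   // load one element per thread
            __syncthreads();

            for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
                if (tid < s) sdata[tid] += sdata[tid + s];   // tree reduction in shared memory
                __syncthreads();
            }
            if (tid == 0) blockSums[blockIdx.x] = sdata[0];  // one partial sum per block
        }
        // Launch: reduceSum<<<numBlocks, 256, 256 * sizeof(float)>>>(d_in, d_partial, n);
        // The per-block partial sums are reduced again by a second launch or on the CPU.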

  12. Analysis and Testing of the Central Processing Unit Load of the Digital Reactor Protection System in a Nuclear Power Plant

    Institute of Scientific and Technical Information of China (English)

    汪绩宁

    2013-01-01

    Nuclear power plants place strict requirements on the central processing unit (CPU) load of the digital reactor protection system. This paper first analyzes the CPU load of the reactor protection system theoretically and derives the corresponding calculation formula, then designs the test method and test equipment and carries out the actual tests. Analysis of the experimental data shows that the CPU load of the reactor protection system meets the technical requirements, and that the load of the main control CPU is higher than that of the standby CPU.

  13. Computer processing of tomography data

    OpenAIRE

    Konečný, Jan

    2011-01-01

    Tomographs are among the most important diagnostic devices and have been used in every hospital for a considerable period of time. The different types of tomographs, the processing of tomographic data and the imaging of these data are the subject of this thesis. I have described the four most common types of tomography: X-ray Computed Tomography, Magnetic Resonance Imaging, Positron Emission Tomography and Single Photon E...

  14. Graphics Processing Units for HEP trigger systems

    Science.gov (United States)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for synchronous low-level triggers, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  15. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for synchronous low-level triggers, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  16. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to a traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds of the CPU version did not deviate appreciably when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  17. Grace: a Cross-platform Micromagnetic Simulator On Graphics Processing Units

    CERN Document Server

    Zhu, Ru

    2014-01-01

    A micromagnetic simulator running on a graphics processing unit (GPU) is presented. It achieves a significant performance boost compared with previous central processing unit (CPU) simulators, up to two orders of magnitude for large input problems. Unlike the GPU implementations of other research groups, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform compatible. It runs on GPUs from vendors including NVidia, AMD and Intel, which paves the way for fast micromagnetic simulation both on high-end workstations with dedicated graphics cards and on low-end personal computers with integrated graphics. A copy of the simulator software is publicly available.

  18. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  19. Fast blood flow visualization of high-resolution laser speckle imaging data using graphics processing unit.

    Science.gov (United States)

    Liu, Shusen; Li, Pengcheng; Luo, Qingming

    2008-09-15

    Laser speckle contrast analysis (LASCA) is a non-invasive, full-field optical technique that produces a two-dimensional map of blood flow in biological tissue by analyzing speckle images captured by a CCD camera. Due to the heavy computation required for speckle contrast analysis, video-frame-rate visualization of blood flow, which is essential for medical use, can hardly be achieved for high-resolution image data using the CPU (Central Processing Unit) of an ordinary PC (Personal Computer). In this paper, we introduce the GPU (Graphics Processing Unit) into our data processing framework for laser speckle contrast imaging to achieve fast, high-resolution blood flow visualization on PCs by exploiting the high floating-point processing power of commodity graphics hardware. By using the GPU, a 12-60-fold performance enhancement is obtained in comparison with optimized CPU implementations.
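
    A minimal CUDA sketch of the per-pixel spatial speckle contrast computation (K = sigma/mu over a small window) that such a framework moves from the CPU to the GPU; the window size, names and border handling are assumptions, not code from the cited work.

        __global__ void speckleContrast(const float *img, float *contrast,
                                        int width, int height, int halfWin) {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height) return;

            float sum = 0.0f, sumSq = 0.0f;
            int count = 0;
            for (int dy = -halfWin; dy <= halfWin; ++dy)
                for (int dx = -halfWin; dx <= halfWin; ++dx) {
                    int xx = min(max(x + dx, 0), width - 1);   // clamp at image borders
                    int yy = min(max(y + dy, 0), height - 1);
                    float v = img[yy * width + xx];
                    sum += v;  sumSq += v * v;  ++count;
                }
            float mean = sum / count;
            float var  = sumSq / count - mean * mean;          // local variance
            contrast[y * width + x] = (mean > 0.0f) ? sqrtf(fmaxf(var, 0.0f)) / mean : 0.0f;
        }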

  20. Retinoblastoma protein: a central processing unit.

    Science.gov (United States)

    Poznic, M

    2009-06-01

    The retinoblastoma protein (pRb) is one of the key cell-cycle regulating proteins and its inactivation leads to neoplastic transformation and carcinogenesis. This protein regulates critical G1-to-S phase transition through interaction with the E2F family of cell-cycle transcription factors repressing transcription of genes required for this cell-cycle check-point transition. Its activity is regulated through network sensing intracellular and extracellular signals which block or permit phosphorylation (inactivation) of the Rb protein. Mechanisms of Rb-dependent cell-cycle control have been widely studied over the past couple of decades. However, recently it was found that pRb also regulates apoptosis through the same interaction with E2F transcription factors and that Rb-E2F complexes play a role in regulating the transcription of genes involved in differentiation and development.

  1. Retinoblastoma protein: a central processing unit

    Indian Academy of Sciences (India)

    M Poznic

    2009-06-01

    The retinoblastoma protein (pRb) is one of the key cell-cycle regulating proteins and its inactivation leads to neoplastic transformation and carcinogenesis. This protein regulates critical G1-to-S phase transition through interaction with the E2F family of cell-cycle transcription factors repressing transcription of genes required for this cell-cycle check-point transition. Its activity is regulated through network sensing intracellular and extracellular signals which block or permit phosphorylation (inactivation) of the Rb protein. Mechanisms of Rb-dependent cell-cycle control have been widely studied over the past couple of decades. However, recently it was found that pRb also regulates apoptosis through the same interaction with E2F transcription factors and that Rb–E2F complexes play a role in regulating the transcription of genes involved in differentiation and development.

  2. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent data as input and produce independent data as output; this independence comes from the nature of such algorithms, since images, stereo pairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicking and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing several days of calculation to several hours. Modern trends in computer technology show increasing numbers of CPU cores in workstations, rising local network speeds and, as a result, dropping prices of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since huge amounts of large raster images must be processed.

  3. Command decoder unit. [performance tests of data processing terminals and data converters for space shuttle orbiters

    Science.gov (United States)

    1976-01-01

    The design and testing of laboratory hardware (a command decoder unit) used in evaluating space shuttle instrumentation, data processing, and ground check-out operations is described. The hardware was a modification of another similar instrumentation system. A data bus coupler was designed and tested to interface the equipment to a central bus controller (computer). A serial digital data transfer mechanism was also designed. Redundant power supplies and overhead modules were provided to minimize the probability of a single component failure causing a catastrophic failure. The command decoder unit is packaged in a modular configuration to allow maximum user flexibility in configuring a system. Test procedures and special test equipment for use in testing the hardware are described. Results indicate that the unit will allow NASA to evaluate future software systems for use in space shuttles. The units were delivered to NASA and appear to be adequately performing their intended function. Engineering sketches and photographs of the command decoder unit are included.

  4. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular for problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas cleaning equipment, and modeling of biogas formation processes.

  5. Rapid learning-based video stereolization using graphic processing unit acceleration

    Science.gov (United States)

    Sun, Tian; Jung, Cheolkon; Wang, Lei; Kim, Joongkyu

    2016-09-01

    Video stereolization has received much attention in recent years due to the lack of stereoscopic three-dimensional (3-D) content. Although video stereolization can enrich stereoscopic 3-D content, it is hard to achieve automatic two-dimensional-to-3-D conversion at low computational cost. We proposed rapid learning-based video stereolization using graphics processing unit (GPU) acceleration. We first generated an initial depth map based on learning from examples. Then, we refined the depth map using saliency and cross-bilateral filtering to make object boundaries clear. Finally, we performed depth-image-based rendering to generate stereoscopic 3-D views. To accelerate the computation of video stereolization, we provided a parallelizable hybrid GPU-central processing unit (CPU) solution suitable for running on the GPU. Experimental results demonstrate that the proposed method is nearly 180 times faster than CPU-based processing and achieves performance comparable to state-of-the-art methods.

  6. United States Military Presence in Central Asia: Implications of United States Basing for Central Asian Stability

    Science.gov (United States)

    2006-06-01

    Europe and reducing the number of military personnel by 40,000 to 60,000. According to United States Air Force General Charles Wald, there are... The Deputy Secretary of Defense Paul Wolfowitz is quoted as saying the United States presence “…may be more political than actually military” and that

  7. Computational Material Processing in Microgravity

    Science.gov (United States)

    2005-01-01

    Working with Professor David Matthiesen at Case Western Reserve University (CWRU), a computer model of the DPIMS (Diffusion Processes in Molten Semiconductors) space experiment was developed that is able to predict the thermal field, flow field and concentration profile within a molten germanium capillary under both ground-based and microgravity conditions. These models are coupled with a novel nonlinear statistical methodology for estimating the diffusion coefficient from measured concentration values after a given time that yields a more accurate estimate than traditional methods. This code was integrated into a web-based application that has become a standard tool used by engineers in the Materials Science Department at CWRU.

  8. Graphics Processing Unit-Based Bioheat Simulation to Facilitate Rapid Decision Making Associated with Cryosurgery Training.

    Science.gov (United States)

    Keelan, Robert; Zhang, Hong; Shimada, Kenji; Rabin, Yoed

    2016-04-01

    This study focuses on the implementation of an efficient numerical technique for cryosurgery simulations on a graphics processing unit as an alternative means to accelerate runtime. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a developmental model. The ability to perform rapid simulations of various test cases is critical to facilitate sound decision making associated with medical training. Consistent with clinical practice, the training tool aims at correlating the frozen region contour and the corresponding temperature field with the target region shape. The current study focuses on the feasibility of graphics processing unit-based computation using C++ accelerated massive parallelism, as one possible implementation. Benchmark results on a variety of computation platforms display between 3-fold acceleration (laptop) and 13-fold acceleration (gaming computer) of cryosurgery simulation, in comparison with the more common implementation on a multicore central processing unit. While the general concept of graphics processing unit-based simulations is not new, its application to phase-change problems, combined with the unique requirements for cryosurgery optimization, represents the core contribution of the current study.
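
    As a much-simplified CUDA sketch of the kind of explicit bioheat update such a simulation parallelises (illustrative only: the cited work uses C++ AMP, a three-dimensional domain and phase change, none of which are reproduced; all symbols are assumed material and grid constants of a Pennes-type model):

        __global__ void bioheatStep(const float *T, float *Tnew, int nx, int ny,
                                    float k, float rhoC, float wbcb, float Ta,
                                    float qm, float dx, float dt) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            int j = blockIdx.y * blockDim.y + threadIdx.y;
            if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;  // skip boundaries

            int idx = j * nx + i;
            float lap = (T[idx - 1] + T[idx + 1] + T[idx - nx] + T[idx + nx]
                         - 4.0f * T[idx]) / (dx * dx);                   // 5-point Laplacian
            float perfusion = wbcb * (Ta - T[idx]);                      // blood perfusion term
            Tnew[idx] = T[idx] + dt * (k * lap + perfusion + qm) / rhoC; // explicit Euler step
        }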

  9. Computer image processing: Geologic applications

    Science.gov (United States)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, (2) normalization of use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.

  10. The changing nature of flooding across the central United States

    Science.gov (United States)

    Mallakpour, Iman; Villarini, Gabriele

    2015-03-01

    In the twentieth and twenty-first centuries, flooding has taken a devastating societal and economic toll on the central United States, contributing to dozens of fatalities and causing billions of dollars in damage. As a warmer atmosphere can hold more moisture (the Clausius-Clapeyron relation), a pronounced increase in intense rainfall events is included in models of future climate. Therefore, it is crucial to examine whether the magnitude and/or frequency of flood events is remaining constant or has been changing over recent decades. If either or both of these attributes have changed over time, it is imperative that we understand the underlying mechanisms that are responsible. Here, we show that while observational records (774 stream gauge stations) from the central United States present limited evidence of significant changes in the magnitude of flood peaks, strong evidence points to an increasing frequency of flooding. These changes in flood hydrology result from changes in both seasonal rainfall and temperature across this region.

  11. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different characteristics of the available memory types, an improved scheme of our method is developed that exploits shared memory on the GPU instead of global memory, further increasing efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
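
    A minimal CUDA sketch of Laplacian sharpening with a shared-memory tile, reflecting the kind of shared-memory optimisation described above (tile size, names and the pass-through border handling are assumptions, not code from the cited work):

        #define TILE 16  // launch with dim3 block(TILE, TILE)

        __global__ void laplacianSharpen(const unsigned char *in, unsigned char *out,
                                         int width, int height) {
            __shared__ unsigned char tile[TILE + 2][TILE + 2];
            int x = blockIdx.x * TILE + threadIdx.x;
            int y = blockIdx.y * TILE + threadIdx.y;
            int tx = threadIdx.x + 1, ty = threadIdx.y + 1;

            // load the tile plus a one-pixel halo, clamping coordinates at the image border
            int cx = min(max(x, 0), width - 1), cy = min(max(y, 0), height - 1);
            tile[ty][tx] = in[cy * width + cx];
            if (threadIdx.x == 0)        tile[ty][0]        = in[cy * width + max(x - 1, 0)];
            if (threadIdx.x == TILE - 1) tile[ty][TILE + 1] = in[cy * width + min(x + 1, width - 1)];
            if (threadIdx.y == 0)        tile[0][tx]        = in[max(y - 1, 0) * width + cx];
            if (threadIdx.y == TILE - 1) tile[TILE + 1][tx] = in[min(y + 1, height - 1) * width + cx];
            __syncthreads();

            if (x >= width || y >= height) return;
            // sharpened = 5*centre - 4 neighbours (centre minus the discrete Laplacian)
            int v = 5 * tile[ty][tx] - tile[ty][tx - 1] - tile[ty][tx + 1]
                                     - tile[ty - 1][tx] - tile[ty + 1][tx];
            out[y * width + x] = (unsigned char)min(max(v, 0), 255);
        }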

  12. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has also been combined with "neural network" programs to enable easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  13. Parallel computer processing and modeling: applications for the ICU

    Science.gov (United States)

    Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.

    2003-07-01

    Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extemporaneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software has been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.

  14. Empirical Foundation of Central Concepts for Computer Science Education

    Science.gov (United States)

    Zendler, Andreas; Spannagel, Christian

    2008-01-01

    The design of computer science curricula should rely on central concepts of the discipline rather than on technical short-term developments. Several authors have proposed lists of basic concepts or fundamental ideas in the past. However, these catalogs were based on subjective decisions without any empirical support. This article describes the…

  15. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd;

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate...

  16. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  17. Accelerating Computation of the Unit Commitment Problem (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Hummon, M.; Barrows, C.; Jones, W.

    2013-10-01

    Production cost models (PCMs) simulate power system operation at hourly (or higher) resolution. While computation times often extend into multiple days, the sequential nature of PCMs makes parallelism difficult. We exploit the persistence of unit commitment decisions to select partition boundaries for simulation horizon decomposition and parallel computation. Partitioned simulations are benchmarked against sequential solutions for optimality and computation time.

  18. Parallel particle swarm optimization on a graphics processing unit with application to trajectory optimization

    Science.gov (United States)

    Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.

    2016-10-01

    In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The processes of fitness evaluation, updating of velocity and position of all particles are all parallelized and introduced in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and CPU (CPU-PSO). The impact of design dimension, number of particles and size of the thread-block in the GPU and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
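
    A minimal CUDA sketch of the per-particle update step that such a GPU-PSO parallelises, with one thread per particle-dimension (coefficient values, array names and the source of the pre-generated random numbers r1, r2 are assumptions; fitness evaluation and best-position bookkeeping are separate kernels and are omitted):

        __global__ void psoUpdate(float *x, float *v, const float *pbest,
                                  const float *gbest, const float *r1, const float *r2,
                                  int nParticles, int dim,
                                  float w, float c1, float c2) {
            int idx = blockIdx.x * blockDim.x + threadIdx.x;   // index = particle*dim + d
            if (idx >= nParticles * dim) return;
            int d = idx % dim;                                 // dimension within the particle

            float vel = w * v[idx]
                      + c1 * r1[idx] * (pbest[idx] - x[idx])   // cognitive pull toward personal best
                      + c2 * r2[idx] * (gbest[d]   - x[idx]);  // social pull toward global best
            v[idx] = vel;
            x[idx] = x[idx] + vel;                             // position update
        }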

  19. Superscalar pipelined inner product computation unit for signed unsigned number

    Directory of Open Access Journals (Sweden)

    Ravindra P. Rajput

    2016-09-01

    Full Text Available In this paper, we propose a superscalar pipelined inner-product computation unit for signed/unsigned numbers operating at 16 GHz. It is designed as a five-stage pipeline with four 8 × 8 multipliers operating in parallel: the first three clock cycles compute the four 8 × 8 partial products in parallel, the fourth cycle computes two inner sums using two adders in parallel, and the fifth stage produces the final product by adding the two inner partial sums, as outlined in the decomposition below. Once the pipeline is filled, a new 16 × 16-bit signed/unsigned product is obtained every clock cycle. The worst delay measured among the pipeline stages is 0.062 ns, and this delay is taken as the clock period, so the pipeline can be operated with a 16 GHz synchronous clock signal. Each superscalar pipeline stage is implemented in 45 nm CMOS process technology, and the comparison of results shows that the delay is decreased by 38%, area is reduced by 45% and power dissipation is saved by 32%.
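
    The decomposition behind the four parallel 8 × 8 multipliers is the standard split of each 16-bit operand into high and low bytes (shown here as background; sign handling of the high-order halves for signed operands is omitted):

        \[
        A = 2^{8}A_H + A_L,\qquad B = 2^{8}B_H + B_L,
        \]
        \[
        A\,B \;=\; 2^{16}A_H B_H \;+\; 2^{8}\bigl(A_H B_L + A_L B_H\bigr) \;+\; A_L B_L ,
        \]

    which matches the pipeline above: the four 8 × 8 partial products are formed in parallel, two adders combine them into two intermediate sums, and a final addition produces the full-width result.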

  20. Computational and Pharmacological Target of Neurovascular Unit for Drug Design and Delivery.

    Science.gov (United States)

    Islam, Md Mirazul; Mohamed, Zahurin

    2015-01-01

    The blood-brain barrier (BBB) is a dynamic and highly selective permeable interface between the central nervous system (CNS) and the periphery that regulates brain homeostasis. Increasing evidence of neurological disorders and the restricted drug delivery process in the brain make the BBB a special target for further study. At present, the neurovascular unit (NVU) is of great interest to pharmaceutical companies for CNS drug design and delivery approaches. Recent advances in pharmacology and computational biology make it possible to develop drugs within a limited time and at affordable cost. In this review, we briefly introduce the current understanding of the NVU, including its molecular and cellular composition, physiology, and regulatory function. We also discuss recent technology and the interaction of pharmacogenomics and bioinformatics for drug design and steps towards personalized medicine. Additionally, we develop a gene network to understand NVU-associated transporter protein interactions, which may help clarify the aetiology of neurological disorders and support the development and delivery of new targeted protective therapies.

  1. Computational and Pharmacological Target of Neurovascular Unit for Drug Design and Delivery

    Directory of Open Access Journals (Sweden)

    Md. Mirazul Islam

    2015-01-01

    Full Text Available The blood-brain barrier (BBB) is a dynamic and highly selective permeable interface between the central nervous system (CNS) and the periphery that regulates brain homeostasis. Increasing evidence of neurological disorders and the restricted drug delivery process in the brain make the BBB a special target for further study. At present, the neurovascular unit (NVU) is of great interest and a highlighted topic for pharmaceutical companies in CNS drug design and delivery approaches. Recent advances in pharmacology and computational biology make it feasible to develop drugs within a limited time and at affordable cost. In this review, we briefly introduce the current understanding of the NVU, including its molecular and cellular composition, physiology, and regulatory function. We also discuss recent technology and the interaction of pharmacogenomics and bioinformatics for drug design and steps towards personalized medicine. Additionally, we develop a gene network in order to understand the interactions of NVU-associated transporter proteins, which might be useful for understanding the aetiology of neurological disorders and for the development and delivery of new target-based protective therapies.

  2. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  3. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.

  4. State of the art and future research on general purpose computation of Graphics Processing Unit

    Institute of Scientific and Technical Information of China (English)

    陈庆奎; 王海峰; 那丽春; 霍欢; 郝聚涛; 刘伯成

    2012-01-01

    Since 2004, general-purpose computation on the graphics processing unit (GPU) has become a new research focus, and GPGPU (General-Purpose Graphics Processing Unit) computing has developed rapidly in recent years. Starting from an introduction to the changes in GPGPU hardware architecture and the development of its software technology, the research results and latest developments in the main application areas of GPGPU are reviewed. As the scale of data in various application fields keeps growing, a single GPU node runs into hardware limits it cannot overcome, which has led to multi-GPU computing and GPU-cluster solutions. The research progress and application technologies of general-purpose GPU clusters are discussed in detail, including the problem of hardware heterogeneity in GPU clusters and three research trends in software frameworks; the features and shortcomings of several typical frameworks, such as Glift, Zippy, and CUDASA, are analyzed in some detail. Finally, the open problems and future challenges in GPU general-purpose computing research are summarized.

  5. Care of central venous catheters in Intensive Care Unit

    Directory of Open Access Journals (Sweden)

    Thomai Kollia

    2015-04-01

    Full Text Available Introduction: Central venous catheters (CVCs) are part of daily clinical practice in the treatment of critically ill patients in the Intensive Care Unit (ICU). Infections associated with CVCs are a serious cause of morbidity and mortality, making the adoption of clinical protocols for their care in the ICU a pressing need. Aim: The aim of this review was to explore the nursing care that prevents CVC infections in the ICU. Method and material: The methodology followed included reviews and research studies. The studies were carried out during the period 2000-2014 and were drawn from foreign electronic databases (Pubmed, Medline, Cochrane) and Greek ones (Iatrotek), on the nursing care of CVCs in the ICU to prevent infections. Results: The literature review showed that the right choice of dressings at the point of entry, the antiseptic solution, the time for replacement of infusion sets, the flushing of the central venous catheter, hand disinfection and, finally, the training of nursing staff are the key points in preventing CVC infections in the ICU. Conclusions: Education of nurses and their compliance with the instructions for CVC care are the gold standard in the prevention of infections.

  6. The Role of Computers in Writing Process

    Science.gov (United States)

    Ulusoy, Mustafa

    2006-01-01

    In this paper, the role of computers in writing process was investigated. Last 25 years of journals were searched to find related articles. Articles and books were classified under prewriting, composing, and revising and editing headings. The review results showed that computers can make writers' job easy in the writing process. In addition,…

  7. Graphics processing unit-assisted lossless decompression

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
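
    For orientation, the sketch below decodes a single Rice-coded value on the host: a unary-coded quotient (a run of 0 bits terminated by a 1) followed by k remainder bits, recombined as (q << k) | r. Bit-ordering conventions differ between implementations, so this is an illustration of the algorithm family rather than the GPU decompressor described above.

        // Host-side sketch of decoding one Rice-coded value: a unary quotient
        // (a run of 0 bits terminated by a 1) followed by k remainder bits.
        // Bit-ordering conventions vary between implementations; this illustrates
        // the algorithm family, not the patented GPU decompressor.
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        struct BitReader {
            const std::vector<uint8_t> &buf;
            size_t pos;                                      // current bit position
            int next() {                                     // read one bit, MSB first
                int bit = (buf[pos >> 3] >> (7 - (pos & 7))) & 1;
                ++pos;
                return bit;
            }
        };

        uint32_t riceDecode(BitReader &br, unsigned k)
        {
            uint32_t q = 0;
            while (br.next() == 0) ++q;                      // unary-coded quotient
            uint32_t r = 0;
            for (unsigned i = 0; i < k; ++i) r = (r << 1) | br.next();   // k remainder bits
            return (q << k) | r;
        }

        int main()
        {
            // 13 with k = 3: quotient 1 -> bits "01", remainder 5 -> bits "101"
            std::vector<uint8_t> stream = { 0x68 };          // bit pattern 0110 1000
            BitReader br{stream, 0};
            printf("%u\n", riceDecode(br, 3));               // prints 13
            return 0;
        }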

  8. Syllables as Processing Units in Handwriting Production

    Science.gov (United States)

    Kandel, Sonia; Alvarez, Carlos J.; Vallee, Nathalie

    2006-01-01

    This research focused on the syllable as a processing unit in handwriting. Participants wrote, in uppercase letters, words that had been visually presented. The interletter intervals provide information on the timing of motor production. In Experiment 1, French participants wrote words that shared the initial letters but had different syllable…

  9. Sandia's computer support units: The first three years

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.N. [Sandia National Labs., Albuquerque, NM (United States). Labs. Computing Dept.

    1997-11-01

    This paper describes the method by which Sandia National Laboratories has deployed information technology to the line organizations and to the desktop as part of the integrated information services organization under the direction of the Chief Information Officer. This deployment has been done by the Computer Support Unit (CSU) Department. The CSU approach is based on the principle of providing local customer service with a corporate perspective. Success required an approach that was both customer compelled at times and market or corporate focused in most cases. Above all, a complete solution was required that included a comprehensive method of technology choices and development, process development, technology implementation, and support. It is the author's hope that this information will be useful in the development of a customer-focused business strategy for information technology deployment and support. Descriptions of current status reflect the status as of May 1997.

  10. Optimization models of the supply of power structures’ organizational units with centralized procurement

    Directory of Open Access Journals (Sweden)

    Sysoiev Volodymyr

    2013-01-01

    Full Text Available Management of the materiel and technical supply of the organizational units of state power structures requires effective decision-support tools, owing to the complexity, interdependence, and dynamism of supply in a market economy. The corporate nature of power structures makes centralized procurement management particularly attractive, as it provides significant advantages through coordination, elimination of duplication, and economies of scale. This article presents optimization models of the supply of state power structures' organizational units with centralized procurement, for different levels of the modelled materiel and technical support processes. The models make it possible to find the most advantageous supply options for the organizational units in a centre-oriented logistics system under changing needs, volumes of allocated funds, and the logistics costs that accompany the supply process: the level of provision of the units with the necessary material and technical resources over the entire planning period is maximized while total logistics costs are minimized, taking into account the diverse nature and different priorities of the organizational units and of the material and technical resources.
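
    As a rough illustration of this kind of model (the notation here is ours, not the author's), let x_{ijt} be the quantity of resource j delivered to organizational unit i in period t, d_{ijt} the corresponding need, w_{ij} a priority weight, c_{ijt} the unit logistics cost, lambda a trade-off weight, and B_t the funds allocated in period t; a centre-oriented plan can then be sketched as

        \max_{x \ge 0} \;\; \sum_{i,j,t} w_{ij}\,\frac{x_{ijt}}{d_{ijt}}
            \;-\; \lambda \sum_{i,j,t} c_{ijt}\, x_{ijt}
        \qquad \text{s.t.} \quad
            x_{ijt} \le d_{ijt}, \qquad
            \sum_{i,j} c_{ijt}\, x_{ijt} \le B_t \quad \forall t .

    The first term rewards the priority-weighted provision level of the units, the second penalizes total logistics costs, and the last constraint keeps each period within its allocated funds.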

  11. Computer-Based Cognitive Tools: Description and Design.

    Science.gov (United States)

    Kennedy, David; McNaught, Carmel

    With computers, tangible tools are represented by the hardware (e.g., the central processing unit, scanners, and video display unit), while intangible tools are represented by the software. There is a special category of computer-based software tools (CBSTs) that have the potential to mediate cognitive processes--computer-based cognitive tools…

  12. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    Science.gov (United States)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
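
    The MPI-plus-GPU layout described above is commonly set up by letting each MPI rank claim a local device at start-up and fall back to a CPU path when no GPU is present. The snippet below is a minimal sketch of that mapping, not the author's code; in practice a node-local rank rather than the global rank would be used to select the device.

        // Sketch (not the author's code): each MPI rank claims a local GPU if one is
        // present and otherwise falls back to a CPU path.
        #include <mpi.h>
        #include <cuda_runtime.h>
        #include <cstdio>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            int nDevices = 0;
            cudaGetDeviceCount(&nDevices);

            if (nDevices > 0) {
                cudaSetDevice(rank % nDevices);              // round-robin ranks onto GPUs
                printf("rank %d -> GPU %d\n", rank, rank % nDevices);
            } else {
                printf("rank %d -> CPU fallback\n", rank);
            }

            MPI_Finalize();
            return 0;
        }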

  13. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased-array radar on NVIDIA GPUs (Graphical Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that GPGPU (General-Purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
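
    A typical building block of such a chain is a batched transform executed through cuFFT, as in the hedged sketch below; the FFT length, batch size and in-place layout are illustrative choices, not values taken from the study, and error checking is omitted.

        // Hedged sketch: a batched 1-D complex FFT over many channels/pulses via cuFFT,
        // the kind of primitive a GPU radar processing chain is assembled from.
        #include <cuda_runtime.h>
        #include <cufft.h>

        int main()
        {
            const int nfft  = 1024;                          // samples per pulse (assumed)
            const int batch = 64;                            // channels processed at once

            cufftComplex *d_data = 0;
            cudaMalloc((void **)&d_data, sizeof(cufftComplex) * nfft * batch);

            cufftHandle plan;
            cufftPlan1d(&plan, nfft, CUFFT_C2C, batch);      // one plan, many transforms
            cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);   // in-place forward FFTs
            cudaDeviceSynchronize();

            cufftDestroy(plan);
            cudaFree(d_data);
            return 0;
        }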

  14. Porting a Hall MHD Code to a Graphic Processing Unit

    Science.gov (United States)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
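
    For reference, the HLL flux mentioned above takes the standard form below, where U_L, U_R are the left and right conserved states, F_L, F_R their physical fluxes, and S_L, S_R estimates of the slowest and fastest signal speeds:

        F_{\mathrm{HLL}} =
        \begin{cases}
            F_L, & S_L \ge 0, \\
            \dfrac{S_R F_L - S_L F_R + S_L S_R \,(U_R - U_L)}{S_R - S_L}, & S_L < 0 < S_R, \\
            F_R, & S_R \le 0 .
        \end{cases}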

  15. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    CERN Document Server

    Xu, Ji; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool to study the behavior of macromolecular systems. But many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications. Many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to MD simulation with the free software GROMACS on a single CPU core, our codes achieve about a 10-fold speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the observed results agree perfectly with computations on the CPU. Therefore, our single-GPU codes already provide an inexpensive alternative for macromolecular simulations on traditional CPU clusters and they can also be used as a basis to develop parallel GPU programs to further spee...
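
    The thread-per-particle idea behind such codes can be sketched as an all-pairs Lennard-Jones force kernel, as below. This is only an illustration under simplifying assumptions (no cutoff, no neighbour lists, no shared-memory tiling) and is not the authors' implementation.

        // Illustrative all-pairs Lennard-Jones force kernel, one thread per particle.
        // Production MD codes use cutoffs, neighbour lists and shared-memory tiling;
        // only the thread-per-particle mapping is shown here.
        __global__ void ljForces(const float3 *pos, float3 *force, int n,
                                 float epsilon, float sigma)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;

            float3 pi = pos[i];
            float3 f  = make_float3(0.0f, 0.0f, 0.0f);
            float  s2 = sigma * sigma;

            for (int j = 0; j < n; ++j) {
                if (j == i) continue;
                float dx = pi.x - pos[j].x;
                float dy = pi.y - pos[j].y;
                float dz = pi.z - pos[j].z;
                float r2  = dx * dx + dy * dy + dz * dz;
                float sr2 = s2 / r2;
                float sr6 = sr2 * sr2 * sr2;
                // F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2 * (r_i - r_j)
                float coef = 24.0f * epsilon * sr6 * (2.0f * sr6 - 1.0f) / r2;
                f.x += coef * dx;
                f.y += coef * dy;
                f.z += coef * dz;
            }
            force[i] = f;
        }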

  16. Homology modeling, docking studies and molecular dynamic simulations using graphical processing unit architecture to probe the type-11 phosphodiesterase catalytic site: a computational approach for the rational design of selective inhibitors.

    Science.gov (United States)

    Cichero, Elena; D'Ursi, Pasqualina; Moscatelli, Marco; Bruno, Olga; Orro, Alessandro; Rotolo, Chiara; Milanesi, Luciano; Fossa, Paola

    2013-12-01

    Phosphodiesterase 11 (PDE11) is the latest isoform of the PDEs family to be identified, acting on both cyclic adenosine monophosphate and cyclic guanosine monophosphate. The initial reports of PDE11 found evidence for PDE11 expression in skeletal muscle, prostate, testis, and salivary glands; however, the tissue distribution of PDE11 still remains a topic of active study and some controversy. Given the sequence similarity between PDE11 and PDE5, several PDE5 inhibitors have been shown to cross-react with PDE11. Accordingly, many non-selective inhibitors, such as IBMX, zaprinast, sildenafil, and dipyridamole, have been documented to inhibit PDE11. Only recently, a series of dihydrothieno[3,2-d]pyrimidin-4(3H)-one derivatives proved to be selective toward the PDE11 isoform. In the absence of experimental data about PDE11 X-ray structures, we found it interesting to gain a better understanding of the enzyme-inhibitor interactions using in silico simulations. In this work, we describe a computational approach based on homology modeling, docking, and molecular dynamics simulation to derive a predictive 3D model of PDE11. Using a Graphical Processing Unit architecture, it is possible to perform long simulations, find stable interactions involved in the complex, and finally to suggest guidelines for the identification and synthesis of potent and selective inhibitors.

  17. Central nervous system infections in the intensive care unit

    Directory of Open Access Journals (Sweden)

    B. Vengamma

    2014-04-01

    Full Text Available Neurological infections constitute an uncommon, but important, aetiological cause requiring admission to an intensive care unit (ICU). In addition, health-care associated neurological infections may develop in critically ill patients admitted to an ICU for other indications. Central nervous system infections can develop as complications in ICU patients, including post-operative neurosurgical patients. While bacterial infections are the most common cause, mycobacterial and fungal infections are also frequently encountered. Delay in the institution of specific treatment is considered to be the single most important poor prognostic factor. Empirical antibiotic therapy must be initiated while awaiting specific culture and sensitivity results. The choice of empirical antimicrobial therapy should take into consideration the most likely pathogens involved, locally prevalent drug-resistance patterns, underlying predisposing and co-morbid conditions, and other factors such as age and immune status. Further, the antibiotic should adequately penetrate the blood-brain and blood-cerebrospinal fluid barriers. The presence of a focal collection of pus warrants immediate surgical drainage. Strict aseptic precautions during surgery, hand hygiene, and care of catheters and devices constitute important preventive measures. A high index of clinical suspicion and aggressive efforts at identification of the aetiological cause and early institution of specific treatment in patients with neurological infections can be life saving.

  18. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    Science.gov (United States)

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  19. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    Science.gov (United States)

    Rath, N.; Kato, S.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  20. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    Science.gov (United States)

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  1. Five Computational Actions in Information Processing

    Directory of Open Access Journals (Sweden)

    Stefan Vladutescu

    2014-12-01

    Full Text Available This study is circumscribed to Information Science. The zetetic aim of the research is twofold: (a) to define the concept of an action of computational information processing, and (b) to design a taxonomy of actions of computational information processing. Our thesis is that any information processing is a computational processing. First, the investigation tries to demonstrate that the computational actions of information processing, or informational actions, are computational-investigative configurations for structuring information: clusters of highly aggregated operations which are carried out in a unitary manner, operate convergently and behave like a single computational device. From a methodological point of view, they belong to the category of analytical instruments for the informational processing of raw material, of data, of vague, confused, unstructured informational elements. In their internal articulation, the actions are patterns for the integrated carrying out of operations of informational investigation. Secondly, we propose an inventory and a description of five basic informational computational actions: exploring, grouping, anticipation, schematization, and inferential structuring. R. S. Wyer and T. K. Srull (2014) speak about "four information processing". We would like to continue with further and future investigation of the relationship between operations, actions, strategies and mechanisms of informational processing.

  2. Fast calculation of HELAS amplitudes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use the graphics processing unit (GPU) for fast calculations of helicity amplitudes of physics processes. As our first attempt, we compute u ū → nγ (n = 2 to 8) processes in pp collisions at √s = 14 TeV by transferring the MadGraph-generated HELAS amplitudes (FORTRAN) into newly developed HEGET (HELAS Evaluation with GPU Enhanced Technology) codes written in CUDA, a C-platform developed by NVIDIA for general purpose computing on the GPU. Compared with the usual CPU programs, we obtain 40-150 times better performance on the GPU.

  3. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    Science.gov (United States)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implemented these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the author's knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code. Accuracy evaluation by comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement depending on the graphics card used, the problem size, and the precision when comparing to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support the visualization of the simulation results and thus the whole MHD simulation and visualization process can be performed entirely on GPUs.

  4. Controlling Laboratory Processes From A Personal Computer

    Science.gov (United States)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  5. The Interaction between Central and Peripheral Processes in Handwriting Production

    Science.gov (United States)

    Roux, Sebastien; McKeeff, Thomas J.; Grosjacques, Geraldine; Afonso, Olivia; Kandel, Sonia

    2013-01-01

    Written production studies investigating central processing have ignored research on the peripheral components of movement execution, and vice versa. This study attempts to integrate both approaches and provide evidence that central and peripheral processes interact during word production. French participants wrote regular words (e.g. FORME),…

  6. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  7. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  8. On the use of graphics processing units (GPUs) for molecular dynamics simulation of spherical particles

    NARCIS (Netherlands)

    Hidalgo, R.C.; Kanzaki, T.; Alonso-Marroquin, F.; Luding, S.; Yu, A.; Dong, K.; Yang, R.; Luding, S.

    2013-01-01

    General-purpose computation on Graphics Processing Units (GPU) on personal computers has recently become an attractive alternative to parallel computing on clusters and supercomputers. We present the GPU-implementation of an accurate molecular dynamics algorithm for a system of spheres. The new hybr

  9. The impact of centrality on cooperative processes

    CERN Document Server

    Reia, Sandro M; Fontanari, José F

    2016-01-01

    The solution of today's complex problems requires the grouping of task forces whose members are usually connected remotely over long physical distances and different time zones; hence the importance of understanding the effects of imposed communication patterns (i.e., who can communicate with whom) on group performance. Here we use an agent-based model to explore the influence of the betweenness centrality of the nodes on the time the group requires to find the global maxima of families of NK-fitness landscapes. The agents cooperate by broadcasting messages informing on their fitness to their neighbors and use this information to copy the more successful agent in their neighborhood. We find that for easy tasks (smooth landscapes) the topology of the communication network has no effect on the performance of the group and that the more central nodes are the most likely to find the global maximum first. For difficult tasks (rugged landscapes), however, we find a positive correlation between the variance of the betw...

  10. Impact of centrality on cooperative processes

    Science.gov (United States)

    Reia, Sandro M.; Herrmann, Sebastian; Fontanari, José F.

    2017-02-01

    The solution of today's complex problems requires the grouping of task forces whose members are usually connected remotely over long physical distances and different time zones. Hence, understanding the effects of imposed communication patterns (i.e., who can communicate with whom) on group performance is important. Here we use an agent-based model to explore the influence of the betweenness centrality of the nodes on the time the group requires to find the global maxima of NK-fitness landscapes. The agents cooperate by broadcasting messages, informing on their fitness to their neighbors, and use this information to copy the more successful agents in their neighborhood. We find that for easy tasks (smooth landscapes), the topology of the communication network has no effect on the performance of the group, and that the more central nodes are the most likely to find the global maximum first. For difficult tasks (rugged landscapes), however, we find a positive correlation between the variance of the betweenness among the network nodes and the group performance. For these tasks, the performances of individual nodes are strongly influenced by the agents' dispositions to cooperate and by the particular realizations of the rugged landscapes.

  11. Computing collinear 4-Body Problem central configurations with given masses

    CERN Document Server

    Piña, E

    2011-01-01

    An interesting description of a collinear configuration of four particles is found in terms of two spherical coordinates. An algorithm to compute the four coordinates of the particles of a collinear four-body central configuration is presented, using an orthocentric tetrahedron whose edge lengths are functions of the given masses. Each mass is placed at the corresponding vertex of the tetrahedron. The center of mass (and orthocenter) of the tetrahedron is at the origin of coordinates. In its initial position the tetrahedron is placed with two pairs of vertices each in a coordinate plane, the lines joining each pair parallel to a coordinate axis, and the center of mass of each pair and the center of mass of all four on one coordinate axis. From this original position the tetrahedron is rotated by two angles around the center of mass until the direction of the configuration coincides with one coordinate axis. The four coordinates of the vertices of the tetrahedron along this direction determine the central configurati...

  12. Making War Work for Industry: The United Alkali Company's Central Laboratory During World War One.

    Science.gov (United States)

    Reed, Peter

    2015-02-01

    The creation of the Central Laboratory immediately after the United Alkali Company (UAC) was formed in 1890, by amalgamating the Leblanc alkali works in Britain, brought high expectations of repositioning the company by replacing its obsolete Leblanc process plant and expanding its range of chemical products. By 1914, UAC had struggled with few exceptions to adopt new technologies and processes and was still reliant on the Leblanc process. From 1914, the Government would rely heavily on its contribution to the war effort. As a major heavy-chemical manufacturer, UAC produced chemicals for explosives and warfare gases, while also trying to maintain production of many essential chemicals including fertilisers for homeland consumption. UAC's wartime effort was led by the Central Laboratory, working closely with the recently established Engineer's Department to develop new process pathways, build new plant, adapt existing plant, and produce the contracted quantities, all as quickly as possible to meet the changing battlefield demands. This article explores how wartime conditions and demands provided the stimulus for the Central Laboratory's crucial R&D work during World War One.

  13. Advanced Computing Architectures for Cognitive Processing

    Science.gov (United States)

    2009-07-01

    ... bioinformatics, computing forces for molecular dynamics simulations, or performing floating-point operations for linear algebra. Reconfigurable computing ... science codes typically involve high precision, very large data sets, and often include linear algebra formulations. Processing these applications on ...

  14. Accelerating Radio Astronomy Cross-Correlation with Graphics Processing Units

    CERN Document Server

    Clark, M A; Greenhill, L J

    2011-01-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from "Large-N" arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi architecture, sustaining up to 79% of the peak single precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared to ASIC and FPGA implementations have the potential to greatly shorten the cycle of correlator development and deployment, for case...

  15. A Performance Comparison of Different Graphics Processing Units Running Direct N-Body Simulations

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2013-01-01

    Hybrid computational architectures based on the joint power of Central Processing Units and Graphic Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a comparison of the performance of various GPUs available on the market when applied to the numerical integration of the classic, gravitational, N-body problem. To do this, we developed an OpenCL version of the parallel code (HiGPUs) to use for these tests, because this version is the only one able to work on GPUs of different makes. The main general result is that we confirm the reliability, speed and cheapness of GPUs when applied to the examined kind of problems (i.e. when the forces to evaluate depend on the mutual distances, as happens in gravitational physics and molecular dynamics). More specifically, we find that even the cheap GPUs built to be employed just for gaming applications are very performant in terms of computing speed...

  16. BitTorrent Processing Unit: an outlook on BPU development

    Institute of Scientific and Technical Information of China (English)

    Zone; 杨原青

    2007-01-01

    In the early days of computing, arithmetic processing, graphics processing, and input and output handling were all carried out by the CPU (Central Processing Unit). As processing became more specialized, however, NVIDIA was the first, in 1999, to split graphics processing off and proposed the concept of the GPU (Graphics Processing Unit). Eight years on, the GPU has become the backbone of graphics processing and is familiar to every gamer. Recently, two Taiwanese companies have put forward the concept of the BPU (BitTorrent Processing Unit). Below, let us take a look at this brand-new concept product.

  17. Use of general purpose graphics processing units with MODFLOW.

    Science.gov (United States)

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and the GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
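
    The iteration that a UPCG-style solver parallelizes is the standard Jacobi-preconditioned conjugate gradient loop over a CSR matrix, sketched below in plain host code; on the GPGPU the vector updates, dot products and the sparse matrix-vector product would be dispatched to the device. Names and the convergence test are illustrative, not taken from the MODFLOW source.

        // Host-side sketch of a Jacobi-preconditioned conjugate gradient loop over a
        // CSR matrix. Illustrative only; not the MODFLOW/UPCG implementation.
        #include <cmath>
        #include <vector>

        struct Csr {
            int n;
            std::vector<int> rowPtr, col;
            std::vector<double> val;
        };

        static void spmv(const Csr &A, const std::vector<double> &x, std::vector<double> &y)
        {
            for (int i = 0; i < A.n; ++i) {
                double s = 0.0;
                for (int k = A.rowPtr[i]; k < A.rowPtr[i + 1]; ++k)
                    s += A.val[k] * x[A.col[k]];
                y[i] = s;
            }
        }

        static double dot(const std::vector<double> &a, const std::vector<double> &b)
        {
            double s = 0.0;
            for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
            return s;
        }

        // Solve A x = b with Jacobi (diagonal) preconditioning; diag holds A's diagonal.
        void pcgJacobi(const Csr &A, const std::vector<double> &b, std::vector<double> &x,
                       const std::vector<double> &diag, int maxIter, double tol)
        {
            int n = A.n;
            std::vector<double> r(n), z(n), p(n), q(n);
            spmv(A, x, q);
            for (int i = 0; i < n; ++i) r[i] = b[i] - q[i];
            for (int i = 0; i < n; ++i) z[i] = r[i] / diag[i];       // z = M^{-1} r
            p = z;
            double rz = dot(r, z);
            for (int it = 0; it < maxIter && std::sqrt(dot(r, r)) > tol; ++it) {
                spmv(A, p, q);
                double alpha = rz / dot(p, q);
                for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
                for (int i = 0; i < n; ++i) z[i] = r[i] / diag[i];
                double rzNew = dot(r, z);
                for (int i = 0; i < n; ++i) p[i] = z[i] + (rzNew / rz) * p[i];
                rz = rzNew;
            }
        }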

  18. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware display a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  19. Soft Computing Techniques for Process Control Applications

    Directory of Open Access Journals (Sweden)

    Rahul Malhotra

    2011-09-01

    Full Text Available Technological innovations in soft computing techniques have brought automation capabilities to new levels of applications. Process control is an important application in any industry for controlling complex system parameters, which can greatly benefit from such advancements. Conventional control theory is based on mathematical models that describe the dynamic behaviour of process control systems. Due to a lack of comprehensibility, conventional controllers are often inferior to intelligent controllers. Soft computing techniques provide the ability to make decisions and to learn from reliable data or an expert's experience. Moreover, soft computing techniques can cope with a variety of environmental and stability-related uncertainties. This paper explores the different areas of soft computing techniques, viz. fuzzy logic, genetic algorithms and the hybridization of the two, and abridges the results of different process control case studies. It is inferred from the results that soft computing controllers provide better control of errors than conventional controllers. Further, hybrid fuzzy genetic algorithm controllers have successfully optimized the errors better than standalone soft computing and conventional techniques.

  20. Computational cameras: convergence of optics and processing.

    Science.gov (United States)

    Zhou, Changyin; Nayar, Shree K

    2011-12-01

    A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.

  1. United States Military in Central Asia: Beyond Operation Enduring Freedom

    Science.gov (United States)

    2009-10-23

    Malinowski, advocacy director for Human Rights Watch, stated, "the United States is most effective in promoting liberty around the world when people...26 U.S. President, The National Security Strategy of the United States of America, page? 27 Thomas Malinowski, "Testimony

  2. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphics processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
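
    The node-per-thread layout described above can be sketched as the kernel below, where each thread sums an isotropic bivariate Gaussian kernel over all particles for its assigned grid node. The single bandwidth h and the names are assumptions made for illustration, not details of the paper's code.

        // Sketch: each thread evaluates the kernel density estimate at one grid node
        // by summing an isotropic bivariate Gaussian kernel over all particles.
        __global__ void kde2d(const float2 *particles, int nParticles,
                              const float2 *nodes, float *density, int nNodes, float h)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nNodes) return;

            const float norm = 1.0f / (2.0f * 3.14159265f * h * h * nParticles);
            float sum = 0.0f;
            for (int p = 0; p < nParticles; ++p) {
                float dx = (nodes[i].x - particles[p].x) / h;
                float dy = (nodes[i].y - particles[p].y) / h;
                sum += expf(-0.5f * (dx * dx + dy * dy));
            }
            density[i] = norm * sum;
        }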

  3. On the computational modeling of FSW processes

    OpenAIRE

    Agelet de Saracibar Bosch, Carlos; Chiumenti, Michèle; Santiago, Diego de; Cervera Ruiz, Miguel; Dialami, Narges; Lombera, Guillermo

    2010-01-01

    This work deals with the computational modeling and numerical simulation of Friction Stir Welding (FSW) processes. Here a quasi-static, transient, mixed stabilized Eulerian formulation is used. Norton-Hoff and Sheppard-Wright rigid thermoplastic material models have been considered. A product formula algorithm, leading to a staggered solution scheme, has been used. The model has been implemented into the in-house developed FE code COMET. Results obtained in the simulation of FSW process are c...

  4. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  5. Loess studies in central United States: Evolution of concepts

    Science.gov (United States)

    Follmer, L.R.

    1996-01-01

    Few words in the realm of earth science have caused more debate than "loess". It is a common term that was first used as a name of a silt deposit before it was defined in a scientific sense. Because this "loose" deposit is easily distinguished from other more coherent deposits, it was recognized as a matter of practical concern and later became the object of much scientific scrutiny. Loess was first recognized along the Rhine Valley in Germany in the 1830s and was first noted in the United States in 1846 along the lower Mississippi River where it later became the center of attention. The use of the name eventually spread around the world, but its use has not been consistently applied. Over the years some interpretations and stratigraphic correlations have been validated, but others have been hotly contested on conceptual grounds and semantic issues. The concept of loess evolved into a complex issue as loess and loess-like deposits were discovered in different parts of the US. The evolution of concepts in the central US developed in four indefinite stages: the eras of (1) discovery and development of hypotheses, (2) conditional acceptance of the eolian origin of loess, (3) "bandwagon" popularity of loess research, and (4) analytical inquiry on the nature of loess. Toward the end of the first era around 1900, the popular opinion on the meaning of the term loess shifted from a lithological sense of loose silt to a lithogenetic sense of eolian silt. However, the dual use of the term fostered a lingering skepticism during the second era that ended in 1944 with an explosion of interest that lasted for more than a decade. In 1944, R.J. Russell proposed and H.N. Fisk defended a new non-eolian, property-based, concept of loess. The eolian advocates reacted with surprise and enthusiasm. Each side used constrained arguments to show their view of the problem, but did not examine the fundamental problem, which was not in the proofs of their hypothesis, but in the definition of

  6. Computation of confidence intervals for Poisson processes

    Science.gov (United States)

    Aguilar-Saavedra, J. A.

    2000-07-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.
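
    The construction being accelerated is the Feldman-Cousins ordering: for a signal strength mu and known background b, candidate counts n are ranked by the likelihood ratio P(n|mu+b)/P(n|mu_best+b) with mu_best = max(0, n-b), and accepted in that order until the requested coverage is reached. The host-side sketch below illustrates only this acceptance-region step (b > 0 and a finite count range are assumed); the scan over a grid of mu values and the fast algorithm of the paper are not reproduced here.

        // Hedged sketch of the Feldman-Cousins acceptance region for one value of the
        // signal strength mu with known background b (> 0 assumed).
        #include <algorithm>
        #include <cmath>
        #include <utility>
        #include <vector>

        static double poissonP(int n, double lambda)
        {
            return std::exp(n * std::log(lambda) - lambda - std::lgamma(n + 1.0));
        }

        std::vector<int> acceptanceRegion(double mu, double b, double cl, int nMax)
        {
            std::vector<std::pair<double, int> > ranked;     // (likelihood ratio, n)
            for (int n = 0; n <= nMax; ++n) {
                double muBest = std::max(0.0, n - b);
                double ratio = poissonP(n, mu + b) / poissonP(n, muBest + b);
                ranked.push_back(std::make_pair(ratio, n));
            }
            std::sort(ranked.begin(), ranked.end(),
                      [](const std::pair<double, int> &u, const std::pair<double, int> &v)
                      { return u.first > v.first; });

            std::vector<int> accepted;                       // counts accepted for this mu
            double coverage = 0.0;
            for (size_t k = 0; k < ranked.size() && coverage < cl; ++k) {
                accepted.push_back(ranked[k].second);
                coverage += poissonP(ranked[k].second, mu + b);
            }
            return accepted;
        }

    The confidence interval for an observed count n0 is then the set of mu values, scanned over a grid, whose acceptance region contains n0.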

  7. Computation of confidence intervals for Poisson processes

    CERN Document Server

    Aguilar-Saavedra, J A

    2000-01-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.

  8. Image processing and computing in structural biology

    NARCIS (Netherlands)

    Jiang, Linhua

    2009-01-01

    With the help of modern techniques of image processing and computing, image data obtained by electron cryo-microscopy of biomolecules can be reconstructed into three-dimensional biological models at sub-nanometer resolution. These models allow answering urgent problems in life science, for instance,

  9. Parallelizing the Cellular Potts Model on graphics processing units

    Science.gov (United States)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs ~80× faster than serial implementations, and ~5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.

  10. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    2012-01-01

    The high floating point performance and memory bandwidth of Graphical Processing Units (GPUs) makes them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires... on their applicability for GPUs. We examine published techniques for iterative methods in interior point methods (IPMs) by applying them to simple test cases, such as a system of masses connected by springs. Iterative methods allow us to deal with the ill-conditioning occurring in the later iterations of the IPM as well... as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards....

  11. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  12. Implementing wide baseline matching algorithms on a graphics processing unit.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

    Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphical processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference of Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.

  13. Simulating Lattice Spin Models on Graphics Processing Units

    CERN Document Server

    Levy, Tal; Rabani, Eran; 10.1021/ct100385b

    2012-01-01

    Lattice spin models are useful for studying critical phenomena and allow the extraction of equilibrium and dynamical properties. Simulations of such systems are usually based on Monte Carlo (MC) techniques, and the main difficulty is often the large computational effort needed when approaching critical points. In this work, it is shown how such simulations can be accelerated with the use of NVIDIA graphics processing units (GPUs) using the CUDA programming architecture. We have developed two different algorithms for lattice spin models, the first useful for equilibrium properties near a second-order phase transition point and the second for dynamical slowing down near a glass transition. The algorithms are based on parallel MC techniques, and speedups from 70- to 150-fold over conventional single-threaded computer codes are obtained using consumer-grade hardware.
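
    A serial NumPy sketch of the checkerboard Metropolis update that underlies such parallel MC algorithms is given below; updating all sites of one parity at once is what exposes the massive parallelism GPUs need. The lattice size, temperature and seed are arbitrary illustrative choices.

```python
import numpy as np

def checkerboard_metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising model using a two-colour checkerboard."""
    parity = (np.add.outer(np.arange(spins.shape[0]),
                           np.arange(spins.shape[1])) % 2)
    for colour in (0, 1):
        # Sum of the four nearest neighbours with periodic boundaries.
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                         # cost of flipping each spin
        flip = (rng.random(spins.shape) < np.exp(-beta * dE)) & (parity == colour)
        spins = np.where(flip, -spins, spins)          # update one colour at once
    return spins

rng = np.random.default_rng(1)
spins = rng.choice(np.array([-1, 1]), size=(128, 128))
for _ in range(200):
    spins = checkerboard_metropolis_sweep(spins, beta=0.44, rng=rng)
print("magnetisation per spin:", spins.mean())
```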

  14. Accelerating sparse linear algebra using graphics processing units

    Science.gov (United States)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is on a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
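
    The sketch below sets up a sparse 2D Poisson system in CSR format and solves it with conjugate gradient using SciPy; the bandwidth-bound kernels inside the solver (sparse matrix-vector products, dot products, vector updates) are the operations that map well to the GPU in the hybrid model described above, while ordering and symbolic work stay on the CPU. The matrix and sizes are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Assemble a sparse 5-point 2D Poisson matrix in CSR format.
n = 100
main = 4 * np.ones(n * n)
off = -np.ones(n * n - 1)
off[np.arange(1, n * n) % n == 0] = 0          # break couplings across grid rows
A = sp.diags([main, off, off, -np.ones(n * n - n), -np.ones(n * n - n)],
             [0, 1, -1, n, -n], format="csr")
b = np.ones(n * n)

# The CG iterations are dominated by SpMV and vector kernels -- the natural
# GPU candidates in the hybrid CPU/GPU model described above.
x, info = cg(A, b)
print("converged" if info == 0 else f"info={info}",
      "| residual:", np.linalg.norm(A @ x - b))
```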

  15. Computer Supported Collaborative Processes in Virtual Organizations

    CERN Document Server

    Paszkiewicz, Zbigniew

    2012-01-01

    In the global economy, a turbulent environment strongly influences an organization's operation. Organizations must constantly adapt to changing circumstances and search for new ways of gaining competitive advantage. To face this challenge, small organizations base their operation on collaboration within Virtual Organizations (VOs). VO operation is based on collaborative processes. Due to the dynamism and required flexibility of collaborative processes, existing business information systems are insufficient to support them efficiently. In this paper a novel method for supporting collaborative processes based on process mining techniques is proposed. The method allows activity patterns in various instances of collaborative processes to be identified and used for the recommendation of activities. This provides an opportunity for better computer support of collaborative processes, leading to more efficient and effective realization of business goals.

  16. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    architectural projects. At the core lies the formulation of a methodology that is based upon the idea of human and computational selection in accordance with pre-defined performance criteria that can be adapted to different requirements by the mere change of parameter input in order to reach location specific......As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  17. Causes and impacts of the deportation of Central American immigrants from the United States to Mexico

    Directory of Open Access Journals (Sweden)

    Simón Pedro Izcara Palacios

    2015-01-01

    Full Text Available Over the last decade, the number of immigrants deported from the United States to Mexico based on an order of removal has nearly doubled. Not all migrants removed to Mexico are Mexican citizens; some are Central American citizens. This article, using qualitative methods that include in-depth interviews with 75 Central American migrants who were deported from the United States, examines the causes and impacts of the deportation of Central American immigrants from the United States to Mexico and concludes that these deportations led to an increase in violence in Mexico.

  18. Computation and brain processes, with special reference to neuroendocrine systems.

    Science.gov (United States)

    Toni, Roberto; Spaletta, Giulia; Casa, Claudia Della; Ravera, Simone; Sandri, Giorgio

    2007-01-01

    The development of neural networks and brain automata has made neuroscientists aware that the performance limits of these brain-like devices lie, at least in part, in their computational power. The computational basis of a standard cybernetic design, in fact, refers to that of a discrete and finite state machine or Turing Machine (TM). In contrast, it has been suggested that a number of human cerebral activities, from feedback controls up to mental processes, rely on a mixture of both finitary, digital-like and infinitary, continuous-like procedures. Therefore, the central nervous system (CNS) of man would exploit a form of computation going beyond that of a TM. This "non-conventional" computation has been called hybrid computation. Some basic structures for hybrid brain computation are believed to be the brain computational maps, in which both Turing-like (digital) computation and continuous (analog) forms of calculus might occur. The cerebral cortex and brain stem appear to be primary candidates for this processing. However, neuroendocrine structures like the hypothalamus are also believed to exhibit hybrid computational processes, and might give rise to computational maps. Current theories on neural activity, including wiring and volume transmission, neuronal group selection and dynamic evolving models of brain automata, lend support to the existence of natural hybrid computation, stressing a cooperation between discrete and continuous forms of communication in the CNS. In addition, the recent advent of neuromorphic chips, like those to restore activity in damaged retina and visual cortex, suggests that assumption of a discrete-continuum polarity in designing biocompatible neural circuitries is crucial for their ensuing performance. In these bionic structures, in fact, a correspondence exists between the original anatomical architecture and the synthetic wiring of the chip, resulting in a correspondence between natural and cybernetic neural activity. Thus, chip "form

  19. Line-by-line spectroscopic simulations on graphics processing units

    Science.gov (United States)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of the processor resources available and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable, but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations on fluid dynamics alone. Program summary: Program title: GPU4RE; Catalogue identifier: ADZY_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 62 776; No. of bytes in distributed program, including test data, etc.: 1 513 247; Distribution format: tar.gz; Programming language: C++; Computer: x86 PC; Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C
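
    To illustrate the data-parallel core of a line-by-line code, the sketch below accumulates Lorentzian line profiles onto a wavenumber grid in NumPy; every (grid point, line) pair is independent, which is the parallelism a GPU exploits. The profile (pressure-broadened Lorentzian only), units and toy line list are simplified assumptions, not the physics of GPU4RE.

```python
import numpy as np

def lorentz_absorption(wavenumber_grid, line_centers, line_strengths, gamma):
    """Accumulate normalised Lorentzian line profiles onto a spectral grid."""
    k = np.zeros_like(wavenumber_grid)
    for nu0, s in zip(line_centers, line_strengths):
        k += s * (gamma / np.pi) / ((wavenumber_grid - nu0) ** 2 + gamma ** 2)
    return k

nu = np.linspace(2000.0, 2100.0, 100_000)        # wavenumber grid, cm^-1
centers = np.linspace(2010.0, 2090.0, 500)        # toy line list
strengths = np.ones_like(centers)                 # arbitrary line strengths
absorption = lorentz_absorption(nu, centers, strengths, gamma=0.07)
print("peak absorption coefficient:", absorption.max())
```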

  20. Computer-aided operational management system for process control in Eems central combined cycle power station/The Netherlands; Rechnergestuetztes Betriebsmanagementsystem fuer das Prozesscontrolling im Kombi-Kraftwerk Eemscentrale/Niederlande

    Energy Technology Data Exchange (ETDEWEB)

    Helmich, P. [Elsag Bailey Hartman und Braun, Minden (Germany); Barends, H.W.M.D. [EPON, Zwolle (Netherlands)

    1997-06-01

    The operational management system supports the operating managers on site and in the headquarters of the undertaking in the following tasks. In the evaluation of process data, important process variables are checked for plausibility; the criterion for plausibility is the closure of the mass and energy balances stored in a computer model. Actual assessment parameters, such as efficiencies, are compared with the reference parameters that are the maximum attainable in the actual operating situation. Finally, the automatic balancing of production and consumption data, revenues, and energy costs takes place within the framework of a profit-and-loss balance. (orig.)
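
    A deliberately simplified sketch of the kind of plausibility test described above is shown below: measured flows are accepted only if the mass balance closes within a tolerance. The variable names, streams and the 1% tolerance are illustrative assumptions, not the plant's actual balance model.

```python
def plausibility_check(inflows_kg_s, outflows_kg_s, tolerance=0.01):
    """Flag measured process data as implausible if the mass balance
    does not close within a relative tolerance."""
    m_in = sum(inflows_kg_s.values())
    m_out = sum(outflows_kg_s.values())
    imbalance = abs(m_in - m_out) / max(m_in, 1e-12)
    return imbalance <= tolerance, imbalance

# Illustrative measurements (kg/s) for a combined-cycle flue-gas path.
ok, err = plausibility_check(
    {"fuel": 12.1, "air": 196.4},
    {"flue_gas": 207.9},
)
print("plausible" if ok else "implausible", f"(relative imbalance {err:.3%})")
```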

  1. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in the ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  2. Managing internode data communications for an uninitialized process in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
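
    The toy Python sketch below mimics the buffering scheme described in the abstract: messages for a not-yet-initialized process accumulate in a fixed-size MU buffer, and an application agent moves them to a temporary buffer in main memory when that buffer fills. Class and method names, sizes and the delivery order are illustrative assumptions, not the patented implementation.

```python
from collections import deque

class MessagingUnit:
    """Toy model of the MU buffer / temporary main-memory buffer scheme."""

    def __init__(self, mu_capacity=4):
        self.mu_buffer = deque()       # fixed-capacity MU memory (modelled below)
        self.mu_capacity = mu_capacity
        self.temp_buffer = []          # "main memory" overflow area

    def receive(self, message):
        if len(self.mu_buffer) >= self.mu_capacity:
            # Application agent: drain the full MU buffer into main memory.
            self.temp_buffer.extend(self.mu_buffer)
            self.mu_buffer.clear()
        self.mu_buffer.append(message)

    def drain_on_init(self):
        """When the process finally initialises, deliver messages in arrival order."""
        messages = self.temp_buffer + list(self.mu_buffer)
        self.temp_buffer.clear()
        self.mu_buffer.clear()
        return messages

mu = MessagingUnit()
for i in range(10):
    mu.receive(f"msg-{i}")
print(mu.drain_on_init())
```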

  3. Computation of combustion and gasifying processes

    Energy Technology Data Exchange (ETDEWEB)

    Kozaczaka, J. [Univ. of Mining and Metallurgy, Krakow, Faculty of Mechanical Engineering and Robotics (Poland); Horbaj, P. [Kosice Univ., Dept. of Power Engineering (Poland)

    2003-08-01

    Engineering computation methods for combustion and gasification processes are presented, together with their application and the treatment of NOx and SOx contents in the resulting gases using chemical equilibrium considerations. The paper deals with the stoichiometric calculation of combustion processes with equilibrium on the side of the products, with calculations of gasification processes, and with calculations of quasi-equilibrium processes. The main part of the article is oriented toward problem-directed equilibrium combustion calculations. The engineering calculation methods for fuel conversion processes presented in this paper can be applied to thermodynamic analyses of complex power systems wherever the heat supply has so far simply been assumed. This will make these analyses more reliable and closer to real conditions. (orig.)
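
    As a minimal example of the engineering combustion calculations referred to above, the sketch below computes the flue-gas composition for methane burned in air with excess air; it ignores dissociation and NOx/SOx chemistry and uses an idealized 21/79 air composition, so it is an illustration rather than the paper's method.

```python
def methane_air_combustion(excess_air=1.1):
    """Flue-gas mole fractions for CH4 + 2 O2 -> CO2 + 2 H2O burned in air.

    excess_air is the air ratio (lambda); air is treated as 21 % O2 / 79 % N2
    by mole, and dissociation is neglected.
    """
    o2_stoich = 2.0                       # mol O2 per mol CH4
    o2_supplied = o2_stoich * excess_air
    n2_supplied = o2_supplied * 79.0 / 21.0
    flue = {
        "CO2": 1.0,
        "H2O": 2.0,
        "O2": o2_supplied - o2_stoich,    # unburned excess oxygen
        "N2": n2_supplied,
    }
    total = sum(flue.values())
    return {k: v / total for k, v in flue.items()}

print(methane_air_combustion(excess_air=1.15))
```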

  4. Earthquakes of the Central United States, 1795-2002

    Science.gov (United States)

    Wheeler, Russell L.

    2003-01-01

    This report describes construction of a list of Central U.S. earthquakes to be shown on a large-format map that is targeted for a non-technical audience. The map shows the locations and sizes of historical earthquakes of magnitude 3.0 or larger over the most seismically active part of the central U.S., including the New Madrid seismic zone. The map shows more than one-half million square kilometers and parts or all of ten States. No existing earthquake catalog had provided current, uniform coverage down to magnitude 3.0, so one had to be made. Consultation with State geological surveys insured compatibility with earthquake lists maintained by them, thereby allowing the surveys and the map to present consistent information to the public.

  5. Evaluating Computer Technology Integration in a Centralized School System

    Science.gov (United States)

    Eteokleous, N.

    2008-01-01

    The study evaluated the current situation in Cyprus elementary classrooms regarding computer technology integration in an attempt to identify ways of expanding teachers' and students' experiences with computer technology. It examined how Cypriot elementary teachers use computers, and the factors that influence computer integration in their…

  6. Massively Parallel Latent Semantic Analyzes using a Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, Joseph M [ORNL; Cui, Xiaohui [ORNL

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture (CUDA) and the CUDA Basic Linear Algebra Subprograms (CUBLAS). The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) whose dimensions were not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary significantly when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA in terms of cost/performance ratio.
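
    The NumPy sketch below shows the truncated-SVD step at the heart of LSA; this dense linear algebra is the part offloaded to CUBLAS on the GPU in the work above, whereas here NumPy's LAPACK-backed SVD stands in for it. The toy term-document matrix and rank are illustrative.

```python
import numpy as np

def lsa(term_document, k=2):
    """Truncated SVD: keep the k largest singular triplets (the 'concept space')."""
    u, s, vt = np.linalg.svd(term_document, full_matrices=False)
    return u[:, :k], s[:k], vt[:k, :]

# Toy term-document matrix (rows = terms, columns = documents).
A = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 3., 1.],
              [0., 1., 2.]])
u_k, s_k, vt_k = lsa(A, k=2)
doc_coords = (np.diag(s_k) @ vt_k).T      # documents in the reduced space
print(doc_coords)
```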

  7. Understanding the Functional Central Limit Theorems with Some Applications to Unit Root Testing with Structural Change

    Directory of Open Access Journals (Sweden)

    Juan Carlos Aquino

    2013-06-01

    Full Text Available The application of different unit root statistics is by now a standard practice in empirical work. Even though it is a practical issue, these statistics have complex nonstandard distributions depending on functionals of certain stochastic processes, and their derivations represent a barrier even for many theoretical econometricians. These derivations are based on rigorous and fundamental statistical tools which are not (very) well known by standard econometricians. This paper aims to fill this gap by explaining in a simple way one of these fundamental tools: namely, the Functional Central Limit Theorem. To this end, this paper analyzes the foundations and applicability of two versions of the Functional Central Limit Theorem within the framework of a unit root with a structural break. Initial attention is focused on the probabilistic structure of the time series to be considered. Thereafter, attention is focused on the asymptotic theory for nonstationary time series proposed by Phillips (1987a), which is applied by Perron (1989) to study the effects of an (assumed) exogenous structural break on the power of the augmented Dickey-Fuller test, and by Zivot and Andrews (1992) to criticize the exogeneity assumption and propose a method for estimating an endogenous breakpoint. A systematic method for dealing with efficiency issues is introduced by Perron and Rodriguez (2003), which extends the Generalized Least Squares detrending approach due to Elliott et al. (1996). An empirical application is provided.
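
    For reference, one standard statement of the Functional Central Limit Theorem (Donsker's theorem) underlying these derivations is written out below in LaTeX. The i.i.d. formulation is the textbook version; the papers cited above work with more general weakly dependent errors.

```latex
% Donsker's Functional Central Limit Theorem (textbook i.i.d. version).
% Let u_1, u_2, \ldots be i.i.d. with E[u_t] = 0 and Var(u_t) = \sigma^2 < \infty.
\[
  X_T(r) \;=\; \frac{1}{\sigma\sqrt{T}} \sum_{t=1}^{\lfloor Tr \rfloor} u_t,
  \qquad r \in [0,1],
  \qquad\text{satisfies}\qquad
  X_T(\cdot) \;\Rightarrow\; W(\cdot) \quad \text{as } T \to \infty,
\]
% where W is a standard Brownian motion on [0,1] and \Rightarrow denotes weak
% convergence in D[0,1]. Unit root asymptotics (e.g., Phillips, 1987a) then map
% sample moments of the partial sums into functionals of W via the continuous
% mapping theorem.
```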

  8. Implicit Theories of Creativity in Computer Science in the United States and China

    Science.gov (United States)

    Tang, Chaoying; Baer, John; Kaufman, James C.

    2015-01-01

    To study implicit concepts of creativity in computer science in the United States and mainland China, we first asked 308 Chinese computer scientists for adjectives that would describe a creative computer scientist. Computer scientists and non-computer scientists from China (N = 1069) and the United States (N = 971) then rated how well those…

  9. Significantly reducing registration time in IGRT using graphics processing units

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Denis de Senneville, Baudouin; Tanderup, Kari

    2008-01-01

    Purpose/Objective For online IGRT, rapid image processing is needed. Fast parallel computations using graphics processing units (GPUs) have recently been made more accessible through general purpose programming interfaces. We present a GPU implementation of the Horn and Schunck method...... respiration phases in a free breathing volunteer and 41 anatomical landmark points in each image series. The registration method used is a multi-resolution GPU implementation of the 3D Horn and Schunck algorithm. It is based on the CUDA framework from Nvidia. Results On an Intel Core 2 CPU at 2.4GHz each...... registration took 30 minutes. On an Nvidia Geforce 8800GTX GPU in the same machine this registration took 37 seconds, making the GPU version 48.7 times faster. The nine image series of different respiration phases were registered to the same reference image (full inhale). Accuracy was evaluated on landmark...
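
    A serial 2D NumPy sketch of the Horn and Schunck iteration is given below for orientation; the gradient approximations, smoothness weight and iteration count are illustrative choices, and the cited work runs the analogous 3D multi-resolution iterations in CUDA.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """2D Horn-Schunck optical flow via Jacobi-style iterations."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    # Simple central differences for spatial gradients, frame difference in time.
    Ix = 0.5 * (np.gradient(im1, axis=1) + np.gradient(im2, axis=1))
    Iy = 0.5 * (np.gradient(im1, axis=0) + np.gradient(im2, axis=0))
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = uniform_filter(u, size=3)
        v_avg = uniform_filter(v, size=3)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v

# Example: estimate the flow between two slightly shifted synthetic frames.
rng = np.random.default_rng(0)
frame1 = uniform_filter(rng.random((64, 64)), size=5)
frame2 = np.roll(frame1, 1, axis=1)
u, v = horn_schunck(frame1, frame2)
```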

  10. Chemical computing with reaction-diffusion processes.

    Science.gov (United States)

    Gorecki, J; Gizynski, K; Guzowski, J; Gorecka, J N; Garstecki, P; Gruenert, G; Dittrich, P

    2015-07-28

    Chemical reactions are responsible for information processing in living organisms. It is believed that the basic features of biological computing activity are reflected by a reaction-diffusion medium. We illustrate the ideas of chemical information processing considering the Belousov-Zhabotinsky (BZ) reaction and its photosensitive variant. The computational universality of information processing is demonstrated. For different methods of information coding constructions of the simplest signal processing devices are described. The function performed by a particular device is determined by the geometrical structure of oscillatory (or of excitable) and non-excitable regions of the medium. In a living organism, the brain is created as a self-grown structure of interacting nonlinear elements and reaches its functionality as the result of learning. We discuss whether such a strategy can be adopted for generation of chemical information processing devices. Recent studies have shown that lipid-covered droplets containing solution of reagents of BZ reaction can be transported by a flowing oil. Therefore, structures of droplets can be spontaneously formed at specific non-equilibrium conditions, for example forced by flows in a microfluidic reactor. We describe how to introduce information to a droplet structure, track the information flow inside it and optimize medium evolution to achieve the maximum reliability. Applications of droplet structures for classification tasks are discussed.

  11. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  12. 78 FR 24775 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and...

    Science.gov (United States)

    2013-04-26

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and... United States after importation of certain wireless communication devices, portable music and...

  13. Central Vestibular Dysfunction in an Otorhinolaryngological Vestibular Unit: Incidence and Diagnostic Strategy

    OpenAIRE

    2014-01-01

    Introduction: Vertigo can be due to a variety of central and peripheral causes. The relative incidence of central causes is underestimated. This may have an important impact on the patients' management and prognosis. Objective: The objective of this work is to determine the incidence of central vestibular disorders in patients presenting to a vestibular unit in a tertiary referral academic center. It also aims at determining the best strategy to increase the diagnostic yield of th...

  14. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    Science.gov (United States)

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  15. Computational Fluid Dynamics - Applications in Manufacturing Processes

    Science.gov (United States)

    Beninati, Maria Laura; Kathol, Austin; Ziemian, Constance

    2012-11-01

    A new Computational Fluid Dynamics (CFD) exercise has been developed for the undergraduate introductory fluid mechanics course at Bucknell University. The goal is to develop a computational exercise that students complete which links the manufacturing processes course and the concurrent fluid mechanics course in a way that reinforces the concepts in both. In general, CFD is used as a tool to increase student understanding of the fundamentals in a virtual world. A "learning factory," which is currently in development at Bucknell, seeks to use the laboratory as a means to link courses that previously seemed to have little correlation at first glance. A large part of the manufacturing processes course is a project using an injection molding machine. The flow of pressurized molten polyurethane into the mold cavity can also be an example of fluid motion (a jet of liquid hitting a plate) that is applied in manufacturing. The students will run a CFD process that captures this flow using their virtual mold created with a graphics package, such as SolidWorks. The laboratory structure is currently being implemented and analyzed as a part of the "learning factory". Lastly, a survey taken before and after the CFD exercise demonstrates a better understanding of both the CFD and manufacturing process.

  16. Magma chamber processes in central volcanic systems of Iceland

    DEFF Research Database (Denmark)

    Þórarinsson, Sigurjón Böðvar; Tegner, Christian

    2009-01-01

    New field work and petrological investigations of the largest gabbro outcrop in Iceland, the Hvalnesfjall gabbro of the 6-7 Ma Austurhorn intrusive complex, have established a stratigraphic sequence exceeding 800 m composed of at least 8 macrorhythmic units. The bases of the macrorhythmic units...... olivine basalts from Iceland that had undergone about 20% crystallisation of olivine, plagioclase and clinopyroxene and that the macrorhythmic units formed from thin magma layers not exceeding 200-300 m. Such a "mushy" magma chamber is akin to volcanic plumbing systems in settings of high magma supply...... rate including the mid-ocean ridges and present-day magma chambers over the Iceland mantle plume. The Austurhorn central volcano likely formed in an off-rift flank zone proximal to the Iceland mantle plume during a major rift relocation....

  17. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tn, Terry; Toups, Larry; Howe, A. Scott; Smitherman, David

    2011-01-01

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU previously served as a test bed for testing technologies and sub-systems in a terrestrial surface environment in 2010 in the Pressurized Excursion Module (PEM) configuration. Due to the amount of work involved to make the HDU project successful, the HDU project has required a team to integrate a variety of contributions from NASA centers and outside collaborators. The size of the team and the number of systems involved with the HDU makes integration a complicated process. However, because the HDU shell manufacturing is complete, the team has a head start on FY-11 integration activities and can focus on integrating upgrades to existing systems as well as integrating new additions. To complete the development of the FY-11 HDU from conception to rollout for operations in July 2011, a cohesive integration strategy has been developed to integrate the various systems of the HDU and the payloads. The highlighted HDU work for FY-11 will focus on performing upgrades to the PEM configuration, adding the X-Hab as a second level, adding a new porch providing the astronauts a larger work area outside the HDU for EVA preparations, and adding a Hygiene module. Together these upgrades result in a prototype configuration of the Deep Space Habitat (DSH), an element under evaluation by NASA's Human Exploration Framework Team (HEFT). Scheduled activities include early fit-checks and the utilization of a Habitat avionics test bed prior to installation into the HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing, sub-system interface length estimation and human factors analysis. Decision processes on integration and use of all new subsystems will be defined early in the project to

  18. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification as positive, negative, or neutral {P, N, Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered to be positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries; this represents a threefold reduction in the computation time of the UnifiedMetrics procedure.
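
    The sketch below computes the Pearson correlation between the polarity scores two source lexicons assign to the same entries, which is the quantity the USL approach uses to decide how to unify them; the example scores and scales are invented for illustration.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x -= x.mean()
    y -= y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# Toy example: polarity scores the same lexical entries receive in two
# source lexicons (values are invented for illustration).
lexicon_a = [0.8, -0.6, 0.1, 0.9, -0.7]
lexicon_b = [0.7, -0.5, 0.0, 0.8, -0.9]
r = pearson(lexicon_a, lexicon_b)
print(f"correlation between source lexicons: {r:+.3f}")
# r near +1 supports merging the scores directly; r near -1 indicates the
# two lexicons annotate on inverted scales.
```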

  19. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  20. 2008 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-03-01

    This report presents the 2008 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of the CNTA was transferred from the DOE Office of Environmental Management (DOE-EM) to DOE-LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 2005) entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site during fiscal year 2008. This is the second groundwater monitoring report prepared by DOE-LM for the CNTA.

  1. FOCUS: Fault Observatory for the Central United States

    Science.gov (United States)

    Wolf, L. W.; Langston, C. A.; Powell, C. A.; Cramer, C.; Johnston, A.; Hill, A.

    2007-12-01

    The mid-continent has a long, complex history of crustal modification and tectonism. Precambrian basement rocks record intense deformation from rifting and convergence that precedes accumulation of a thick sequence of Phanerozoic and recent sediments that constitute the present-day Mississippi Embayment. Despite its location far from the active North American plate margins, the New Madrid seismic zone of central U.S. exhibits a diffuse yet persistent pattern of seismicity, indicating that the region continues to be tectonically active. What causes this intraplate seismicity? How does the intraplate lithosphere support local, regional and plate-wide forces that maintain earthquake productivity in this supposedly stable tectonic setting? These long-standing scientific questions are the motivation behind the proposed establishment of a borehole geo-observatory in the New Madrid seismic zone. FOCUS (Fault Observatory for the Central U.S.) would allow an unprecedented look into the deep sediments and underlying rocks of the Embayment. The proposed drill hole would fill a critical need for better information on the geophysical, mechanical, petrological, and hydrological properties of the brittle crust and overlying sediments that would help to refine models of earthquake generation, wave propagation, and seismic hazard. Measurements of strains and strain transients, episodic tremor, seismic wave velocities, wave attenuation and amplification, heat flow, non-linear sediment response, fluid pressures, crustal permeabilities, fluid chemistry, and rock strength are just some of the target data sets needed. The ultimate goal of FOCUS is to drill a 5-6 km deep scientific hole into the Precambrian basement and into the New Madrid seismic zone. The scientific goal of FOCUS is a better understanding of why earthquakes occur in intraplate settings and a better definition of seismic hazard to benefit the public safety. Short-term objectives include the preparation of an

  2. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  3. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  4. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities Project

    Energy Technology Data Exchange (ETDEWEB)

    Kevin J. Coppersmith; Lawrence A. Salomone; Chris W. Fuller; Laura L. Glaser; Kathryn L. Hanson; Ross D. Hartleb; William R. Lettis; Scott C. Lindvall; Stephen M. McDuffie; Robin K. McGuire; Gerry L. Stirewalt; Gabriel R. Toro; Robert R. Youngs; David L. Slayter; Serkan B. Bozkurt; Randolph J. Cumbest; Valentina Montaldo Falero; Roseanne C. Perman; Allison M. Shumway; Frank H. Syms; Martitia (Tish) P. Tuttle

    2012-01-31

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986), and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model.
    Results and Findings: The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments.
    Challenges and Objectives: The regional CEUS SSC model will be of value to readers who are involved in PSHA work, and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  5. Can Children with (Central) Auditory Processing Disorders Ignore Irrelevant Sounds?

    Science.gov (United States)

    Elliott, Emily M.; Bhagat, Shaum P.; Lynn, Sharon D.

    2007-01-01

    This study investigated the effects of irrelevant sounds on the serial recall performance of visually presented digits in a sample of children diagnosed with (central) auditory processing disorders [(C)APD] and age- and span-matched control groups. The irrelevant sounds used were samples of tones and speech. Memory performance was significantly…

  6. Insulating process for HT-7U central solenoid model coils

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The HT-7U superconducting Tokamak is a fully superconducting magnetically confined fusion device. The insulating system of its central solenoid coils is critical to its performance. In this paper the forming of the insulating system and the vacuum pressure impregnation (VPI) are introduced, and the whole insulating process is verified under superconducting experiment conditions.

  7. Review of computational fluid dynamics applications in biotechnology processes.

    Science.gov (United States)

    Sharma, C; Malhotra, D; Rathore, A S

    2011-01-01

    Computational fluid dynamics (CFD) is well established as a tool of choice for solving problems that involve one or more of the following phenomena: flow of fluids, heat transfer, mass transfer, and chemical reaction. Unit operations that are commonly utilized in biotechnology processes are often complex and as such would greatly benefit from application of CFD. The thirst for deeper process and product understanding that has arisen out of initiatives such as quality by design provides further impetus toward the usefulness of CFD for problems that may otherwise require extensive experimentation. Not surprisingly, there has been increasing interest in applying CFD toward a variety of applications in biotechnology processing in the last decade. In this article, we review applications in the major unit operations involved with processing of biotechnology products. These include fermentation, centrifugation, chromatography, ultrafiltration, microfiltration, and freeze drying. We feel that the future applications of CFD in biotechnology processing will focus on establishing CFD as a tool of choice for providing process understanding that can then be used to guide more efficient and effective experimentation. This article puts special emphasis on the work done in the last 10 years.

  8. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    Science.gov (United States)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during the decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The resulting numerical model accuracy was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for decreasing significantly the simulation time [3, 4]. The numerical scheme implemented in the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against single-core (sequential) and multi-core (parallel) CPU implementations. References: [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources. 71 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013). 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics. 225 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software. 78 1-15. [Lacasta

  9. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    Science.gov (United States)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of the GPU technology is susceptible to reduce significantly the computing time of simulations. The code presented herein is based on the freely accessible software of Bohlen (2002) in 2D, provided under the GNU General Public License (GPL). This implementation is based on a second-order centred differences scheme to approximate time differences and staggered grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100^2 and 6000^2 elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size
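
    The sketch below advances a 1D velocity-stress system one step on a staggered grid with second-order centred differences, the serial analogue of the stencil kernels that the paper executes as OpenCL work-items; viscoelastic memory variables, higher-order operators and realistic material parameters are omitted, so it is a toy model only.

```python
import numpy as np

def staggered_1d_step(v, s, rho, mu, dt, dx):
    """One leapfrog step of a 1D velocity-stress scheme on a staggered grid."""
    # Update particle velocity from the spatial derivative of stress.
    v[1:] += (dt / (rho * dx)) * (s[1:] - s[:-1])
    # Update stress from the spatial derivative of the new velocity.
    s[:-1] += (dt * mu / dx) * (v[1:] - v[:-1])
    return v, s

nx, dx, dt = 2000, 1.0, 2e-4
rho, mu = 1000.0, 1.0e9                  # density and modulus (toy values)
v = np.zeros(nx)
s = np.zeros(nx)
s[nx // 2] = 1.0                         # impulsive source in the stress field
for _ in range(1000):                    # CFL: c*dt/dx = 0.2 < 1, stable
    v, s = staggered_1d_step(v, s, rho, mu, dt, dx)
```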

  10. Quantitative computer representation of propellant processing

    Science.gov (United States)

    Hicks, M. D.; Nikravesh, P. E.

    1990-01-01

    With the technology currently available for the manufacture of propellants, it is possible to control the variance of the total specific impulse obtained from the rocket boosters to within approximately five percent. Though at first inspection this may appear to be a reasonable amount of control, when it is considered that any uncertainty in the total kinetic energy delivered to the spacecraft translates into a design with less total usable payload, even this degree of uncertainty becomes unacceptable. There is strong motivation to control the variance in the specific impulse of the shuttle's solid boosters. Any small gains in the predictability and reliability of the boosters would lead to a very substantial payoff in earth-to-orbit payload. The purpose of this study is to examine one aspect of the manufacture of solid propellants, namely, the mixing process. The traditional approach of computational fluid mechanics is notoriously complex and time consuming. Certain simplifications are made, yet certain fundamental aspects of the mixing process are investigated as a whole. It is possible to consider a mixing process in a mathematical sense as an operator, F, which maps a domain back upon itself. An operator which demonstrates good mixing should be able to spread any subset of the domain completely and evenly throughout the whole domain by successive applications of the mixing operator, F. Two- and three-dimensional models are developed, and graphical visualizations of two- and three-dimensional mixing processes are presented.
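
    To make the operator picture concrete, the sketch below iterates the baker's map, a classic mixing operator on the unit square, and shows how repeated application spreads an initially small subset over the whole domain; it is a generic illustration of the mathematical idea, not the propellant-mixing model of the study.

```python
import numpy as np

def bakers_map(points):
    """One application of the baker's map F on the unit square."""
    x, y = points[:, 0], points[:, 1]
    left = x < 0.5
    new_x = np.where(left, 2 * x, 2 * x - 1)
    new_y = np.where(left, 0.5 * y, 0.5 * y + 0.5)
    return np.column_stack([new_x, new_y])

# Start from a small blob (a subset of the domain) and iterate the operator.
rng = np.random.default_rng(2)
pts = 0.05 * rng.random((5000, 2)) + 0.2
for _ in range(8):
    pts = bakers_map(pts)
# After a few iterations the blob is stretched and folded across the square;
# a 2D histogram of pts approaches uniformity, the signature of good mixing.
hist, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=4, range=[[0, 1], [0, 1]])
print(hist)
```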

  11. Technical evaluation of proposed Ukrainian Central Radioactive Waste Processing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gates, R.; Glukhov, A.; Markowski, F.

    1996-06-01

    This technical report is a comprehensive evaluation of the proposal by the Ukrainian State Committee on Nuclear Power Utilization to create a central facility for radioactive waste (not spent fuel) processing. The central facility is intended to process liquid and solid radioactive wastes generated from all of the Ukrainian nuclear power plants and the waste generated as a result of Chernobyl 1, 2 and 3 decommissioning efforts. In addition, this report provides general information on the quantity and total activity of radioactive waste in the 30-km Zone and the Sarcophagus from the Chernobyl accident. Processing options are described that may ultimately be used in the long-term disposal of selected 30-km Zone and Sarcophagus wastes. A detailed report on the issues concerning the construction of a Ukrainian Central Radioactive Waste Processing Facility (CRWPF) from the Ukrainian Scientific Research and Design institute for Industrial Technology was obtained and incorporated into this report. This report outlines various processing options, their associated costs and construction schedules, which can be applied to solving the operating and decommissioning radioactive waste management problems in Ukraine. The costs and schedules are best estimates based upon the most current US industry practice and vendor information. This report focuses primarily on the handling and processing of what is defined in the US as low-level radioactive wastes.

  12. Computer Applications in the Design Process.

    Science.gov (United States)

    Winchip, Susan

    Computer Assisted Design (CAD) and Computer Assisted Manufacturing (CAM) are emerging technologies now being used in home economics and interior design applications. A microcomputer in a computer network system is capable of executing computer graphic functions such as three-dimensional modeling, as well as utilizing office automation packages to…

  13. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at the Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  14. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip;

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent...

  15. Efficient graphics processing unit-based voxel carving for surveillance

    Science.gov (United States)

    Ober-Gecks, Antje; Zwicker, Marius; Henrich, Dominik

    2016-07-01

    A graphics processing unit (GPU)-based implementation of a space carving method for the reconstruction of the photo hull is presented. In particular, the generalized voxel coloring with item buffer approach is transferred to the GPU. The fast computation on the GPU is realized by an incrementally calculated standard deviation within the likelihood ratio test, which is applied as color consistency criterion. A fast and efficient computation of complete voxel-pixel projections is provided using volume rendering methods. This generates a speedup of the iterative carving procedure while considering all given pixel color information. Different volume rendering methods, such as texture mapping and raycasting, are examined. The termination of the voxel carving procedure is controlled through an anytime concept. The photo hull algorithm is examined for its applicability to real-world surveillance scenarios as an online reconstruction method. For this reason, a GPU-based redesign of a visual hull algorithm is provided that utilizes geometric knowledge about known static occluders of the scene in order to create a conservative and complete visual hull that includes all given objects. This visual hull approximation serves as input for the photo hull algorithm.

  16. An Examination of the Relationship between Acculturation Level and PTSD among Central American Immigrants in the United States

    Science.gov (United States)

    Sankey, Sarita Marie

    2010-01-01

    The purpose of this study was to examine the relationship between acculturation level and posttraumatic stress disorder (PTSD) prevalence in Central American immigrants in the United States. Central American immigrants represent a population that is a part of the Latino/Hispanic Diaspora in the United States. By the year 2050 the United States…

  17. Computer modeling of complete IC fabrication process

    Science.gov (United States)

    Dutton, Robert W.

    1987-05-01

    The development of fundamental algorithms for process and device modeling as well as novel integration of the tools for advanced Integrated Circuit (IC) technology design is discussed. The development of the first complete 2D process simulator, SUPREM 4, is reported. The algorithms are discussed as well as application to local-oxidation and extrinsic diffusion conditions which occur in CMOS and BiCMOS technologies. The evolution of 1D (SEDAN) and 2D (PISCES) device analysis is discussed. The application of SEDAN to a variety of non-silicon technologies (GaAs and HgCdTe) is considered. A new multi-window analysis capability for PISCES which exploits Monte Carlo analysis of hot carriers has been demonstrated and used to characterize a variety of silicon MOSFET and GaAs MESFET effects. A parallel computer implementation of PISCES has been achieved using a Hypercube architecture. The PISCES program has been used for a range of important device studies including: latchup, analog switch analysis, MOSFET capacitance studies, and bipolar transient analysis for ECL gates. The program is broadly applicable to RAM and BiCMOS technology analysis and design. In the analog switch technology area this research effort has produced a variety of important modeling advances.

  18. Seismic hazard methodology for the Central and Eastern United States: Volume 1: Part 2, Methodology (Revision 1): Final report

    Energy Technology Data Exchange (ETDEWEB)

    McGuire, R.K.; Veneziano, D.; Van Dyck, J.; Toro, G.; O' Hara, T.; Drake, L.; Patwardhan, A.; Kulkarni, R.; Keeney, R.; Winkler, R.

    1988-11-01

    Aided by its consultant, the U.S. Geological Survey (USGS), the Nuclear Regulatory Commission (NRC) reviewed "Seismic Hazard Methodology for the Central and Eastern United States." This topical report was submitted jointly by the Seismicity Owners Group (SOG) and the Electric Power Research Institute (EPRI) in July 1986 and was revised in February 1987. The NRC staff concludes that the SOG/EPRI Seismic Hazard Methodology, as documented in the topical report and associated submittals, is an acceptable methodology for use in calculating seismic hazard in the Central and Eastern United States (CEUS). These calculations will be based upon the data and information documented in the material that was submitted as the SOG/EPRI topical report and ancillary submittals. However, as part of the review process the staff conditions its approval by noting areas in which problems may arise unless precautions detailed in the report are observed. 23 refs.

  19. Central pain processing in osteoarthritis: implications for treatment.

    Science.gov (United States)

    Hassan, Hafiz; Walsh, David A

    2014-01-01

    Osteoarthritis (OA) is a major cause of pain and is characterized by loss of articular cartilage integrity, synovitis and remodeling of subchondral bone. However, OA pain mechanisms remain incompletely understood. Pain severity does not always correlate with the extent of joint damage. Furthermore, many people with OA continue to experience pain despite optimal use of standard therapies that target the joints, including joint-replacement surgery. There is compelling evidence that altered central pain processing plays an important role in maintaining pain and increasing pain severity in some people with OA. A key challenge is to identify this subgroup of patients with abnormal central pain processing in order to improve their clinical outcomes by developing and targeting specific analgesic treatments.

  20. GPU-accelerated micromagnetic simulations using cloud computing

    Science.gov (United States)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  1. GPU-accelerated micromagnetic simulations using cloud computing

    CERN Document Server

    Jermain, C L; Buhrman, R A; Ralph, D C

    2015-01-01

    Highly-parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  2. Optimized Technology for Residuum Processing in the ARGG Unit

    Institute of Scientific and Technical Information of China (English)

    Pan Luoqi; Yuan hongxing; Nie Baiqiu

    2006-01-01

    The influence of feedstock properties on the operation of the FCC unit was studied to identify the causes of the deteriorated product distribution associated with the increasingly heavy feedstock processed in the ARGG unit. In order to maximize the economic benefits of the ARGG unit, a series of measures, including modification of the catalyst formulation, retention of high catalyst activity, application of mixed termination agents to control the reaction temperature, once-through operation, and optimization of the catalyst regeneration technique, were adopted to adapt the ARGG unit to processing heavy feedstock with a carbon residue averaging 7%. The heavy oil processing technology has brought about apparent economic benefits.

  3. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing

  4. Accelerating glassy dynamics using graphics processing units

    CERN Document Server

    Colberg, Peter H

    2009-01-01

    Modern graphics hardware offers peak performances close to 1 Tflop/s, and NVIDIA's CUDA provides a flexible and convenient programming interface to exploit these immense computing resources. We demonstrate the ability of GPUs to perform high-precision molecular dynamics simulations for nearly a million particles running stably over many days. Particular emphasis is put on the numerical long-time stability in terms of energy and momentum conservation. Floating point precision is a crucial issue here, and sufficient precision is maintained by double-single emulation of the floating point arithmetic. As a demanding test case, we have reproduced the slow dynamics of a binary Lennard-Jones mixture close to the glass transition. The improved numerical accuracy permits us to follow the relaxation dynamics of a large system over 4 non-trivial decades in time. Further, our data provide evidence for a negative power-law decay of the velocity autocorrelation function with exponent 5/2 in the close vicinity of the transi...

  5. Processing instrumentation technology: Process definition with a cognitive computer

    Energy Technology Data Exchange (ETDEWEB)

    Price, H.L. [Wilkes Univ., Wilkes-Barre, PA (United States). Mechanical and Materials Engineering Dept.

    1996-11-01

    Much of the polymer composites industry is built around the thermochemical conversion of raw material into useful composites. The raw materials (molding compound, prepreg) often are made up of thermosetting resins and small fibers or particles. While this conversion can follow a large number of paths, only a few paths are efficient, economical and lead to desirable composite properties. Processing instrument (P/I) technology enables a computer to sense and interpret changes taking place during the cure of prepreg or molding compound. P/I technology has been used to make estimates of gel time and cure time, thermal diffusivity measurements and transition temperature measurements. Control and sensing software is comparatively straightforward. The interpretation of results with appropriate software is under development.

  6. Latar as the Central Point of Houses Group Unit: Identifiability for Spatial Structure in Kasongan, Yogyakarta, Indonesia

    Directory of Open Access Journals (Sweden)

    T. Yoyok Wahyu Subroto

    2012-05-01

    The massive spatial expansion of the city into rural areas in recent decades has caused problems related to spatial exploitation in the surrounding villages. This raises the question of whether the conversion of open space into built-up land has implications for the spatial structure of settlement growth and evolution in these villages. This paper reports a case study of Kasongan village in Bantul regency, Yogyakarta, Indonesia, between 1973 and 2010, where the issue of spatial structure is rarely addressed, especially in analyses of village settlement growth and evolution. A bound axis consisting of four quadrants and one intersection, following the reference axes of a Cartesian Coordinate System (CCS), is used to analyze the setting of the house group units across the four quadrants. Through this spatial process analysis using a spatial structure approach, the continuity of the latar (yard) at the center of the house group unit is detected. The research finds that the latar at the 'central point' of the house group unit in Kasongan has remained, over four decades, the prominent factor of the basic spatial structure; it composes the house group unit in Kasongan.

  7. Accelerating chemical database searching using graphics processing units.

    Science.gov (United States)

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which have led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097.
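
    Because the abstract names the Elias gamma code as the compression scheme, a compact reference implementation may help fix ideas. The sketch below encodes and decodes positive integers (for example, the gaps between set bits of a sparse lossless fingerprint) as bit strings; it is a plain Python illustration of the code itself, not the GPU decompression kernel evaluated in the paper.

        def elias_gamma_encode(n: int) -> str:
            """Encode a positive integer: (bit-length - 1) zeros, then the binary form of n."""
            assert n >= 1
            binary = bin(n)[2:]
            return "0" * (len(binary) - 1) + binary

        def elias_gamma_decode(bits: str):
            """Decode a concatenated stream of Elias gamma codes back to integers."""
            values, i = [], 0
            while i < len(bits):
                zeros = 0
                while bits[i] == "0":
                    zeros += 1
                    i += 1
                values.append(int(bits[i:i + zeros + 1], 2))
                i += zeros + 1
            return values

        # Example: gaps between "on" bits of a sparse fingerprint
        gaps = [1, 5, 17, 2]
        stream = "".join(elias_gamma_encode(g) for g in gaps)
        assert elias_gamma_decode(stream) == gaps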

  8. In-line filters in central venous catheters in a neonatal intensive care unit

    NARCIS (Netherlands)

    van den Hoogen, A; Krediet, TG; Uiterwaal, CSPM; Bolenius, JFGA; Gerards, LJ; Fleer, A

    2006-01-01

    Nosocomial sepsis remains an important cause of morbidity in neonatal intensive care units. Central venous catheters (CVCs) and parenteral nutrition (TPN) are major risk factors. In-line filters in the intravenous (IV) administration sets prevent the infusion of particles, which may reduce infectious…

  9. 77 FR 59679 - Central Vermont Public Service Corporation (Millstone Power Station, Unit 3); Order Approving...

    Science.gov (United States)

    2012-09-28

    .... (DNC), Central Vermont Public Service Corporation (CVPS) and Massachusetts Municipal Wholesale Electric Company (MMWE) (collectively "the licensees" or "DNC, Inc., et al.") are the co-holders of the Renewed... Power Station, Unit 3 (MPS3). CVPS is a non-operating owner of a 1.7303% interest in MPS3. DNC is...

  10. Spatiotemporal computed tomography of dynamic processes

    Science.gov (United States)

    Kaestner, Anders; Münch, Beat; Trtik, Pavel; Butler, Les

    2011-12-01

    Modern computed tomography (CT) equipment allowing fast 3-D imaging also makes it possible to monitor dynamic processes by 4-D imaging. Because the acquisition time of various 3-D CT systems still ranges from milliseconds to hours, depending on the detector system and the source, the balance between the desired temporal and spatial resolution must be adjusted. Furthermore, motion artifacts will occur, especially at high spatial resolution and longer measuring times. We propose two approaches based on nonsequential projection angle sequences allowing a convenient postacquisition balance of temporal and spatial resolution. Both strategies are compatible with existing instruments, needing only a simple reprogramming of the angle list used for projection acquisition and care with the projection order list. Both approaches will reduce the impact of artifacts due to motion. The strategies are applied and validated with cold neutron imaging of water desorption from originally saturated particles during natural air-drying experiments and with x-ray tomography of a polymer blend heated during imaging.
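
    The nonsequential projection-angle ordering at the heart of these strategies can be illustrated with a short sketch. The snippet below builds a golden-ratio-based angle list, one common choice (assumed here; the authors' exact ordering may differ) that spreads projections so that any contiguous block of acquisitions already samples the angular range roughly uniformly, which is what permits the post-acquisition trade-off between temporal and spatial resolution.

        import math

        def golden_ratio_angles(n_projections, angular_range=180.0):
            """Nonsequential projection angle list: consecutive projections are
            separated by an irrational fraction (~0.618) of the range, so any window
            of consecutive projections covers the angular range roughly evenly."""
            increment = angular_range * (math.sqrt(5.0) - 1.0) / 2.0  # ~111.25 deg for a 180 deg range
            return [(i * increment) % angular_range for i in range(n_projections)]

        angles = golden_ratio_angles(400)
        # Reconstructing from any contiguous subset of `angles` trades temporal
        # resolution (shorter acquisition window) against spatial resolution.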

  11. Computer software for process hazards analysis.

    Science.gov (United States)

    Hyatt, N

    2000-10-01

    Computerized software tools are assuming major significance in conducting HAZOPs. This is because they have the potential to offer better online presentations and performance to HAZOP teams, as well as better documentation and downstream tracking. The chances of something being "missed" are greatly reduced. We know, only too well, that HAZOP sessions can be like the industrial equivalent of a trip to the dentist. Sessions can (and usually do) become arduous and painstaking. To make the process easier for all those involved, we need all the help computerized software can provide. In this paper I have outlined the challenges addressed in the production of Windows software for performing HAZOP and other forms of PHA. The object is to produce more "intelligent", more user-friendly software for performing HAZOP where technical interaction between team members is of key significance. HAZOP techniques, having already proven themselves, are extending into the field of computer control and human error. This makes further demands on HAZOP software and emphasizes its importance.

  12. Four central questions about prediction in language processing.

    Science.gov (United States)

    Huettig, Falk

    2015-11-11

    The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing. This article is part of a Special Issue entitled SI: Prediction and Attention.

  13. Computer Support for Document Management in the Danish Central Government

    DEFF Research Database (Denmark)

    Hertzum, Morten

    1995-01-01

    Document management systems are generally assumed to hold a potential for delegating the recording and retrieval of documents to professionals such as civil servants and for supporting the coordination and control of work, so-called workflow management. This study investigates the use and organizational impact of document management systems in the Danish central government. The currently used systems unfold around the recording of incoming and outgoing paper mail and have typically not been accompanied by organizational changes. Rather, document management tends to remain an appendix…

  14. Factors influencing the operation speed of computer data processing

    Institute of Scientific and Technical Information of China (English)

    吕睿

    2015-01-01

    Given that the current speed of computer data processing struggles to meet people's growing entertainment and office demands and thus restricts the development and progress of computer technology, this paper briefly analyzes the basic concepts of computer data processing, clarifies its data processing characteristics, and, combining this with the relevant theory, examines the factors that influence the operation speed of computer data processing, including the central processing unit, computer memory, and computer hard disk. It concludes that the central processing unit (CPU), computer memory, and computer hard disk are the main factors influencing the speed of computer data processing, and recommends comprehensive optimization in order to improve the operation speed of computer data processing.

  15. Marrying Content and Process in Computer Science Education

    Science.gov (United States)

    Zendler, A.; Spannagel, C.; Klaudt, D.

    2011-01-01

    Constructivist approaches to computer science education emphasize that as well as knowledge, thinking skills and processes are involved in active knowledge construction. K-12 computer science curricula must not be based on fashions and trends, but on contents and processes that are observable in various domains of computer science, that can be…

  16. Fast extended focused imaging in digital holography using a graphics processing unit.

    Science.gov (United States)

    Wang, Le; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen

    2011-05-01

    We present a simple and effective method for reconstructing extended focused images in digital holography using a graphics processing unit (GPU). The Fresnel transform method is simplified by an algorithm named fast Fourier transform pruning with frequency shift. Then the pixel size consistency problem is solved by coordinate transformation and combining the subpixel resampling and the fast Fourier transform pruning with frequency shift. With the assistance of the GPU, we implemented an improved parallel version of this method, which obtained about a 300-500-fold speedup compared with central processing unit codes.

  17. Coordination processes in computer supported collaborative writing

    NARCIS (Netherlands)

    Kanselaar, G.; Erkens, Gijsbert; Jaspers, Jos; Prangsma, M.E.

    2005-01-01

    In the COSAR-project a computer-supported collaborative learning environment enables students to collaborate in writing an argumentative essay. The TC3 groupware environment (TC3: Text Composer, Computer supported and Collaborative) offers access to relevant information sources, a private notepad, a

  18. Bandwidth Enhancement between Graphics Processing Units on the Peripheral Component Interconnect Bus

    Directory of Open Access Journals (Sweden)

    ANTON Alin

    2015-10-01

    General purpose computing on graphics processing units is a new trend in high performance computing. Present day applications require office and personal supercomputers which are mostly based on many-core hardware accelerators communicating with the host system through the Peripheral Component Interconnect (PCI) bus. Parallel data compression is a difficult topic, but compression has been used successfully to improve the communication between parallel message passing interface (MPI) processes on high performance computing clusters. In this paper we show that special purpose compression algorithms designed for scientific floating point data can be used to enhance the bandwidth between 2 graphics processing unit (GPU) devices on the PCI Express (PCIe) 3.0 x16 bus in a homebuilt personal supercomputer (PSC).
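
    As a rough illustration of why compressing scientific floating-point data can raise the effective bus bandwidth, the sketch below byte-shuffles a float32 array (grouping bytes of equal significance together, as Blosc-style compressors do) before a generic zlib pass. This is an assumed, simplified stand-in, not the special-purpose compressor benchmarked in the paper.

        import zlib
        import numpy as np

        def shuffle_compress(data: np.ndarray) -> bytes:
            """Byte-shuffle a float32 array, then deflate it. Grouping same-significance
            bytes of neighbouring values together usually compresses much better than
            compressing the raw buffer, which is the effect such transfers exploit."""
            raw = np.ascontiguousarray(data, dtype=np.float32)
            shuffled = raw.view(np.uint8).reshape(-1, 4).T.copy()  # 4 byte planes
            return zlib.compress(shuffled.tobytes(), level=1)

        def decompress_unshuffle(blob: bytes, n_values: int) -> np.ndarray:
            planes = np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(4, n_values)
            return planes.T.copy().view(np.float32).ravel()

        x = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float32)  # smooth, scientific-like data
        blob = shuffle_compress(x)
        assert np.array_equal(decompress_unshuffle(blob, x.size), x)
        print(f"compression ratio: {x.nbytes / len(blob):.1f}x")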

  19. Advances in the application of graphics processing units to the computation of large-scale mechanical problems

    Institute of Scientific and Technical Information of China (English)

    夏健明; 魏德敏

    2010-01-01

    Modern graphics processing units (GPUs) offer strong parallel numerical computing capability. This paper briefly introduces the GPU hardware architecture, the data structures and implementation methods for GPU-based general-purpose computing, and the OpenGL shading language used to write fragment programs. It then reviews research progress in applying GPUs to large-scale mechanical problems, including: GPU simulation of natural fluid phenomena, which in essence solves the Navier-Stokes equations with the finite difference method; GPU implementation of the finite element method, using a GPU-based conjugate gradient solver for the finite element equations; GPU molecular dynamics, computing short-range interatomic forces and generating neighbor lists on the GPU; GPU quantum Monte Carlo calculations; and GPU computation of the gravitational interaction of n bodies, storing the positions, masses, velocities, and accelerations of the bodies in GPU textures. Comparing GPU-based with CPU-based computation, the following GPU computations have been completed: Gaussian elimination and conjugate gradient solvers for linear systems, applied to large-scale finite element analysis; accelerated meshless-method computations; accelerated linear and nonlinear molecular structural mechanics computations; and analysis of the mechanical properties of carbon nanotubes. Directions for further research on GPUs in large-scale mechanical computation are indicated.
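
    Since the review highlights GPU-based conjugate gradient solvers for finite element systems, a plain NumPy reference version of the conjugate gradient iteration is sketched below for orientation. A GPU version would move the matrix-vector and vector-vector products onto the device; that mapping is not shown, and the toy system is an assumption for illustration.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            """Solve A x = b for symmetric positive-definite A (the structure of many
            finite element stiffness systems) using the conjugate gradient method."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Toy symmetric positive-definite system
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))  # approximately [0.0909, 0.6364]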

  20. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    Science.gov (United States)

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  1. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    Science.gov (United States)

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-01

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. Integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  2. Ground motion-simulations of 1811-1812 New Madrid earthquakes, central United States

    Science.gov (United States)

    Ramirez-Guzman, L.; Graves, Robert; Olsen, Kim B.; Boyd, Oliver; Cramer, Chris H.; Hartzell, Stephen; Ni, Sidao; Somerville, Paul G.; Williams, Robert; Zhong, Jinquan

    2015-01-01

    We performed a suite of numerical simulations based on the 1811–1812 New Madrid seismic zone (NMSZ) earthquakes, which demonstrate the importance of 3D geologic structure and rupture directivity on the ground‐motion response throughout a broad region of the central United States (CUS) for these events. Our simulation set consists of 20 hypothetical earthquakes located along two faults associated with the current seismicity trends in the NMSZ. The hypothetical scenarios range in magnitude from M 7.0 to 7.7 and consider various epicenters, slip distributions, and rupture characterization approaches. The low‐frequency component of our simulations was computed deterministically up to a frequency of 1 Hz using a regional 3D seismic velocity model and was combined with higher‐frequency motions calculated for a 1D medium to generate broadband synthetics (0–40 Hz in some cases). For strike‐slip earthquakes located on the southwest–northeast‐striking NMSZ axial arm of seismicity, our simulations show 2–10 s period energy channeling along the trend of the Reelfoot rift and focusing strong shaking northeast toward Paducah, Kentucky, and Evansville, Indiana, and southwest toward Little Rock, Arkansas. These waveguide effects are further accentuated by rupture directivity such that an event with a western epicenter creates strong amplification toward the northeast, whereas an eastern epicenter creates strong amplification toward the southwest. These effects are not as prevalent for simulations on the reverse‐mechanism Reelfoot fault, and large peak ground velocities (>40  cm/s) are typically confined to the near‐source region along the up‐dip projection of the fault. Nonetheless, these basin response and rupture directivity effects have a significant impact on the pattern and level of the estimated intensities, which leads to additional uncertainty not previously considered in magnitude estimates of the 1811–1812 sequence based only on historical

  3. Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET (HELAS Evaluation with GPU Enhanced Technology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general purpose computing on GPU) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final state jets we can evaluate on a GPU is limited to 4 for pure gluon processes ($gg \to 4g$), or 5 for processes with one or more quark lines such as $q\bar{q} \to 5g$ and $qq \to qq + 3g$. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the $gg \to 4g$ processes for which the GPU gain over the CPU is about 20.

  4. Central Vestibular Dysfunction in an Otorhinolaryngological Vestibular Unit: Incidence and Diagnostic Strategy

    Directory of Open Access Journals (Sweden)

    Mostafa, Badr E.

    2014-03-01

    Introduction Vertigo can be due to a variety of central and peripheral causes. The relative incidence of central causes is underestimated. This may have an important impact on the patients' management and prognosis. Objective The objective of this work is to determine the incidence of central vestibular disorders in patients presenting to a vestibular unit in a tertiary referral academic center. It also aims at determining the best strategy to increase the diagnostic yield of the patients' visit. Methods This is a prospective observational study on 100 consecutive patients with symptoms suggestive of vestibular dysfunction. All patients completed a structured questionnaire and received bedside and vestibular examination and neuroimaging as required. Results There were 69 women and 31 men. Their ages ranged between 28 and 73 (mean 42.48 years). Provisional videonystagmography (VNG) results were: 40% benign paroxysmal positional vertigo (BPPV), 23% suspicious of central causes, 18% undiagnosed, 15% Meniere disease, and 4% vestibular neuronitis. Patients with an unclear diagnosis or central features (41) had magnetic resonance imaging (MRI) and Doppler studies. Combining data from history, VNG, and imaging studies, 23 patients (23%) were diagnosed as having a central vestibular lesion (10 with generalized ischemia/vertebrobasilar insufficiency, 4 with multiple sclerosis, 4 with migraine vestibulopathy, 4 with phobic postural vertigo, and 1 with hyperventilation-induced nystagmus). Conclusions Combining a careful history with clinical examination, VNG, MRI, and Doppler studies decreases the number of undiagnosed cases and increases the detection of possible central lesions.

  5. Central vestibular dysfunction in an otorhinolaryngological vestibular unit: incidence and diagnostic strategy.

    Science.gov (United States)

    Mostafa, Badr E; Kahky, Ayman O El; Kader, Hisham M Abdel; Rizk, Michael

    2014-07-01

    Introduction Vertigo can be due to a variety of central and peripheral causes. The relative incidence of central causes is underestimated. This may have an important impact on the patients' management and prognosis. Objective The objective of this work is to determine the incidence of central vestibular disorders in patients presenting to a vestibular unit in a tertiary referral academic center. It also aims at determining the best strategy to increase the diagnostic yield of the patients' visit. Methods This is a prospective observational study on 100 consecutive patients with symptoms suggestive of vestibular dysfunction. All patients completed a structured questionnaire and received bedside and vestibular examination and neuroimaging as required. Results There were 69 women and 31 men. Their ages ranged between 28 and 73 (mean 42.48 years). Provisional videonystagmography (VNG) results were: 40% benign paroxysmal positional vertigo (BPPV), 23% suspicious of central causes, 18% undiagnosed, 15% Meniere disease, and 4% vestibular neuronitis. Patients with an unclear diagnosis or central features (41) had magnetic resonance imaging (MRI) and Doppler studies. Combining data from history, VNG, and imaging studies, 23 patients (23%) were diagnosed as having a central vestibular lesion (10 with generalized ischemia/vertebrobasilar insufficiency, 4 with multiple sclerosis, 4 with migraine vestibulopathy, 4 with phobic postural vertigo, and 1 with hyperventilation-induced nystagmus). Conclusions Combining a careful history with clinical examination, VNG, MRI, and Doppler studies decreases the number of undiagnosed cases and increases the detection of possible central lesions.

  6. Unit cell-based computer-aided manufacturing system for tissue engineering.

    Science.gov (United States)

    Kang, Hyun-Wook; Park, Jeong Hun; Kang, Tae-Yun; Seol, Young-Joon; Cho, Dong-Woo

    2012-03-01

    Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundreds of micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for the computing process of a unit cell representing a single pore structure. Next, an algorithm and software were developed and applied to construct porous structures with a single or multiple pore design using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of a scaffold for tissue engineering.

  7. Computer simulation for designing waste reduction in chemical processing

    Energy Technology Data Exchange (ETDEWEB)

    Mallick, S.K. [Oak Ridge Inst. for Science and Technology, TN (United States); Cabezas, H.; Bare, J.C. [Environmental Protection Agency, Cincinnati, OH (United States)

    1996-12-31

    A new methodology has been developed for implementing waste reduction in the design of chemical processes using computer simulation. The methodology is based on a generic pollution balance around a process. For steady state conditions, the pollution balance equation is used as the basis to define a pollution index with units of pounds of pollution per pound of products. The pollution balance has been modified by weighting the mass of each pollutant by a chemical ranking of environmental impact. The chemical ranking expresses the well known fact that all chemicals do not have the same environmental impact, e.g., all chemicals are not equally toxic. Adding the chemical ranking effectively converts the pollutant mass balance into a balance over environmental impact. A modified pollution index, or impact index, with units of environmental impact per mass of products is derived from the impact balance. The impact index is a measure of the environmental effects due to the waste generated by a process. It is extremely useful when comparing the effect of the pollution generated by alternative processes or process conditions in the manufacture of any given product. The following three different schemes for the chemical ranking have been considered: (i) no ranking, i.e., considering that all chemicals have the same environmental impact; (ii) a simple numerical ranking of wastes from 0 to 3 according to the authors' judgment of the impact of each chemical; and (iii) ranking wastes according to a scientifically derived combined index of human health and environmental effects. Use of the methodology has been illustrated with an example of production of synthetic ammonia. 3 refs., 2 figs., 1 tab.
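
    The pollution and impact indices described above reduce to a short formula. The sketch below is an assumed minimal form, with illustrative masses and impact weights rather than the paper's actual ranking schemes.

        def impact_index(pollutant_masses, impact_weights, product_mass):
            """Environmental impact generated per unit mass of product: each pollutant
            mass (lb) is weighted by a relative impact ranking before dividing by the
            mass of product made (lb). With all weights equal to 1 this reduces to the
            plain pollution index (lb pollution per lb product)."""
            weighted = sum(m * w for m, w in zip(pollutant_masses, impact_weights))
            return weighted / product_mass

        # Illustrative comparison of two process alternatives making 100 lb of product
        print(impact_index([2.0, 0.5], [1.0, 3.0], 100.0))  # 0.035
        print(impact_index([1.0, 1.0], [1.0, 3.0], 100.0))  # 0.040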

  8. Evaluation of the Central Hearing Process in Parkinson Patients

    Directory of Open Access Journals (Sweden)

    Santos, Rosane Sampaio

    2011-04-01

    Introduction: Parkinson disease (PD) is an insidious degenerative disease that impairs the central nervous system and causes biological, psychological, and social changes. Its motor signs and symptoms are characterized by tremor, postural instability, rigidity, and bradykinesia. Objective: To evaluate central hearing function in PD patients. Method: A descriptive, prospective and cross-sectional study, in which 10 individuals diagnosed with PD, named the study group (SG), and 10 normally hearing individuals, named the control group (CG), were evaluated, with a mean age of 63.8 (SD 5.96). Both groups underwent otorhinolaryngological and conventional audiological evaluations and the dichotic test of alternate disyllables (SSW). Results: In the quantitative analysis, the CG showed 80% normality on competitive right-ear hearing (RC) and 60% on competitive left-ear hearing (LC), compared with the SG, which presented 70% on RC and 40% on LC. In the qualitative analysis, the largest percentage of errors in the SG was evident in the order effect. The results showed difficulty in identifying a sound in the presence of a competing sound, as well as in memory ability. Conclusion: A qualitative and quantitative difference was observed between the evaluated groups in the SSW test, although the differences were not statistically significant. The importance of evaluating the central hearing process is emphasized, as it contributes to the procedures adopted in therapeutic follow-up.

  9. 2009 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-09-01

    This report presents the 2009 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of CNTA was transferred from the DOE Office of Environmental Management to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site from October 2008 through December 2009. It also represents the first year of the enhanced monitoring network and begins the new 5-year proof-of-concept monitoring period that is intended to validate the compliance boundary.

  10. 2010 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2011-02-01

    This report presents the 2010 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of CNTA was transferred from the DOE Office of Environmental Management to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site from December 2009 through December 2010. It also represents the second year of the enhanced monitoring network and the 5-year proof-of-concept monitoring period that is intended to validate the compliance boundary.

  11. Closure Report Central Nevada Test Area Subsurface Corrective Action Unit 443 January 2016

    Energy Technology Data Exchange (ETDEWEB)

    Findlay, Rick [US Department of Energy, Washington, DC (United States). Office of Legacy Management

    2015-11-01

    The U.S. Department of Energy (DOE) Office of Legacy Management (LM) prepared this Closure Report for the subsurface Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA), Nevada, Site. CNTA was the site of a 0.2- to 1-megaton underground nuclear test in 1968. Responsibility for the site’s environmental restoration was transferred from the DOE, National Nuclear Security Administration, Nevada Field Office to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 1996, as amended 2011) and all applicable Nevada Division of Environmental Protection (NDEP) policies and regulations. This Closure Report provides justification for closure of CAU 443 and provides a summary of completed closure activities; describes the selected corrective action alternative; provides an implementation plan for long-term monitoring with well network maintenance and approaches/policies for institutional controls (ICs); and presents the contaminant, compliance, and use-restriction boundaries for the site.

  12. A quantum computer based on recombination processes in microelectronic devices

    Science.gov (United States)

    Theodoropoulos, K.; Ntalaperas, D.; Petras, I.; Konofaos, N.

    2005-01-01

    In this paper a quantum computer based on the recombination processes happening in semiconductor devices is presented. A "data element" and a "computational element" are derived based on Shockley-Read-Hall statistics and they can later be used to manifest a simple and known quantum computing process. Such a paradigm is shown by the application of the proposed computer onto a well known physical system involving traps in semiconductor devices.

  13. A quantum computer based on recombination processes in microelectronic devices

    Energy Technology Data Exchange (ETDEWEB)

    Theodoropoulos, K [Computer Engineering and Informatics Department, University of Patras, Patras (Greece); Ntalaperas, D [Computer Engineering and Informatics Department, University of Patras, Patras (Greece); Research Academic Computer Technology Institute, Riga Feraiou 61, 26110, Patras (Greece); Petras, I [Computer Engineering and Informatics Department, University of Patras, Patras (Greece); Konofaos, N [Computer Engineering and Informatics Department, University of Patras, Patras (Greece)

    2005-01-01

    In this paper a quantum computer based on the recombination processes happening in semiconductor devices is presented. A 'data element' and a 'computational element' are derived based on Shockley-Read-Hall statistics and they can later be used to manifest a simple and known quantum computing process. Such a paradigm is shown by the application of the proposed computer onto a well known physical system involving traps in semiconductor devices.

  14. Kinematic Modelling of Disc Galaxies using Graphics Processing Units

    CERN Document Server

    Bekiaris, Georgios; Fluke, Christopher J; Abraham, Roberto

    2015-01-01

    With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the Graphics Processing Unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and Nested Sampling algorithms, but also a naive brute-force approach based on Nested Grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multi-threaded dual CPU configuration. Our method's accuracy, precision and robustness a...

  15. Kinematic modelling of disc galaxies using graphics processing units

    Science.gov (United States)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
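
    One of the optimization strategies mentioned is a brute-force approach based on nested grids; the sketch below shows that idea on a toy one-dimensional rotation-curve model: evaluate a chi-square objective on a coarse parameter grid, then repeatedly refine the grid around the best cell. The model form, parameter names, and grid settings are assumptions for illustration only and are not the GBKFIT implementation.

        import numpy as np

        def model_velocity(radius, v_max, r_turn):
            """Toy arctangent rotation-curve model (an assumption, not the GBKFIT model)."""
            return 2.0 / np.pi * v_max * np.arctan(radius / r_turn)

        def chi2(params, radius, v_obs, v_err):
            v_max, r_turn = params
            return float(np.sum(((v_obs - model_velocity(radius, v_max, r_turn)) / v_err) ** 2))

        def nested_grid_fit(radius, v_obs, v_err, bounds, levels=4, points=11):
            """Brute-force fit: scan a coarse grid, then refine around the best cell."""
            (lo1, hi1), (lo2, hi2) = bounds
            best = None
            for _ in range(levels):
                grid1 = np.linspace(lo1, hi1, points)
                grid2 = np.linspace(lo2, hi2, points)
                best = min(((chi2((a, b), radius, v_obs, v_err), (a, b))
                            for a in grid1 for b in grid2), key=lambda t: t[0])
                a, b = best[1]
                d1, d2 = (hi1 - lo1) / points, (hi2 - lo2) / points
                lo1, hi1, lo2, hi2 = a - d1, a + d1, b - d2, b + d2
            return best

        rng = np.random.default_rng(0)
        r = np.linspace(0.5, 10.0, 30)
        v = model_velocity(r, 220.0, 2.0) + rng.normal(0.0, 5.0, r.size)
        best_chi2, (v_max_fit, r_turn_fit) = nested_grid_fit(
            r, v, np.full(r.size, 5.0), bounds=[(50.0, 400.0), (0.2, 8.0)])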

  16. Research on general-purpose computing technology based on graphics processing units

    Institute of Scientific and Technical Information of China (English)

    戴长江; 张尤赛

    2013-01-01

    In order to study the general-purpose computing technology of the GPU on a PC, the classic GPU general-purpose computing method based on texture mapping was adopted, and experiments on the discrete convolution of 2D images and on volume rendering based on 3D texture mapping were carried out. The experimental results indicate that, given a suitable algorithm design, the classic GPU general-purpose computing technique can significantly enhance program performance. It is concluded that the CPU+GPU heterogeneous computing mode can become a choice for high-performance computation, and the further development of general-purpose computing technology based on the GPU is discussed.
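
    As a point of reference for the 2D discrete convolution experiment mentioned above, the short NumPy sketch below is the plain CPU formulation of the operation that the texture-mapping GPU approach accelerates; it is illustrative only and is not the code used in the study.

        import numpy as np

        def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
            """Direct 2D discrete convolution with zero padding ('same' output size)."""
            kh, kw = kernel.shape
            ph, pw = kh // 2, kw // 2
            padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
            flipped = kernel[::-1, ::-1]  # convolution flips the kernel
            out = np.zeros_like(image, dtype=float)
            for i in range(image.shape[0]):
                for j in range(image.shape[1]):
                    out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
            return out

        image = np.random.rand(64, 64)
        laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
        edges = convolve2d(image, laplacian)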

  17. Seismic evidence for whole lithosphere separation between Saxothuringian and Moldanubian tectonic units in central Europe

    OpenAIRE

    Heuer, B.; Horst Kämpf; Rainer Kind; W. H. Geissler

    2007-01-01

    The Bohemian Massif is part of the Variscan belt of central Europe. We carried out a high resolution mapping of lithospheric thickness beneath central Europe by investigating 264 teleseismic events recorded at 80 broad band stations in the western Bohemian Massif with the method of S receiver function analysis. A negative phase beneath the Saxothuringian and north-eastern Teplá-Barrandian units at about 9-10 s before the S onset is interpreted as caused by the lithosphere-asthenosphere bounda...

  18. Business Process Compliance through Reusable Units of Compliant Processes

    NARCIS (Netherlands)

    Shumm, D.; Turetken, O.; Kokash, N.; Elgammal, A.; Leymann, F.; Heuvel, J. van den

    2010-01-01

    Compliance management is essential for ensuring that organizational business processes and supporting information systems are in accordance with a set of prescribed requirements originating from laws, regulations, and various legislative or technical documents such as the Sarbanes-Oxley Act or ISO 17799.

  19. Study guide to accompany computers data and processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Study Guide to Accompany Computer and Data Processing provides information pertinent to the fundamental aspects of computers and computer technology. This book presents the key benefits of using computers. Organized into five parts encompassing 19 chapters, this book begins with an overview of the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. This text then introduces computer hardware and describes the processor. Other chapters describe how microprocessors are made and describe the physical operation of computers. This book discusses as w

  20. A Centralized Processing Framework for Foliage Penetration Human Tracking in Multistatic Radar

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2016-04-01

    A complete centralized processing framework is proposed for human tracking using multistatic radar in the foliage-penetration environment. The configuration of the multistatic radar system is described. Primary attention is devoted to time of arrival (TOA) estimation and target localization. An improved approach that takes the geometrical center as the TOA estimate of the human target is given. The minimum mean square error pairing (MMSEP) approach is introduced for multi-target localization in the multistatic radar system. An improved MMSEP algorithm is proposed using the maximum velocity limitation and the global nearest neighbor criterion, efficiently decreasing the computational cost of MMSEP. The experimental results verify the effectiveness of the centralized processing framework.

  1. Atmospheric processes triggering the central European floods in June 2013

    Directory of Open Access Journals (Sweden)

    C. M. Grams

    2014-07-01

    In June 2013, central Europe was hit by a century flood affecting the Danube and Elbe catchments after a 4 day period of heavy precipitation and causing severe human and economic loss. In this study model analysis and observational data are investigated to reveal the key atmospheric processes that caused the heavy precipitation event. The period preceding the flood was characterised by a weather regime associated with cool and unusually wet conditions resulting from repeated Rossby wave breaking (RWB). During the event a single RWB established a reversed baroclinicity in the low to mid-troposphere in central Europe with cool air trapped over the Alps and warmer air to the north. The upper-level cut-off resulting from the RWB instigated three consecutive cyclones in eastern Europe that unusually tracked westward during the days of heavy precipitation. Continuous large-scale slantwise ascent in so-called "equatorward ascending" warm conveyor belts (WCBs) associated with these cyclones is found to be the key process that caused the 4 day heavy precipitation period. Fed by moisture sources from continental evapotranspiration, these WCBs unusually ascended equatorward along the southward sloping moist isentropes. Although "equatorward ascending" WCBs are climatologically rare events, they have great potential for causing high impact weather.

  2. Computer-aided modeling of aluminophosphate zeolites as packings of building units

    KAUST Repository

    Peskov, Maxim

    2012-03-22

    New building schemes of aluminophosphate molecular sieves from packing units (PUs) are proposed. We have investigated 61 framework types discovered in zeolite-like aluminophosphates and have identified important PU combinations using a recently implemented computational algorithm of the TOPOS package. All PUs whose packing completely determines the overall topology of the aluminophosphate framework were described and catalogued. We have enumerated 235 building models for the aluminophosphates belonging to 61 zeolite framework types, from ring- or cage-like PU clusters. It is indicated that PUs can be considered as precursor species in the zeolite synthesis processes. © 2012 American Chemical Society.

  3. Image-Processing Software For A Hypercube Computer

    Science.gov (United States)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  4. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.

  5. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  6. Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units

    Science.gov (United States)

    Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark

    2012-02-01

    We study the implementation of classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing the performance of GPU computing is the proper handling of the data structure. Utilizing multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speedup of over two thousand times compared to the single-threaded CPU implementation.
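
    The parallel tempering step named above has a compact core: replicas at neighbouring temperatures exchange configurations with a Metropolis-like probability. The sketch below shows that swap rule in plain Python; the GPU, multi-spin-coded implementation from the paper is not reproduced here, and the example energies and temperatures are illustrative only.

        import math
        import random

        def attempt_swaps(energies, betas, replica_of_temp):
            """One sweep of replica-exchange attempts between neighbouring temperatures.
            energies[r] is the current energy of replica r; betas[t] = 1/T_t;
            replica_of_temp[t] is the replica currently simulated at temperature t."""
            for t in range(len(betas) - 1):
                r_lo, r_hi = replica_of_temp[t], replica_of_temp[t + 1]
                delta = (betas[t] - betas[t + 1]) * (energies[r_lo] - energies[r_hi])
                # accept the exchange with probability min(1, exp(delta))
                if delta >= 0 or random.random() < math.exp(delta):
                    replica_of_temp[t], replica_of_temp[t + 1] = r_hi, r_lo
            return replica_of_temp

        # Example with 4 replicas at inverse temperatures beta = 1/T
        betas = [2.0, 1.5, 1.0, 0.5]
        energies = [-100.0, -90.0, -80.0, -60.0]
        mapping = attempt_swaps(energies, betas, list(range(4)))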

  7. Putting down roots in earthquake country-Your handbook for earthquakes in the Central United States

    Science.gov (United States)

    Contributors: Dart, Richard; McCarthy, Jill; McCallister, Natasha; Williams, Robert A.

    2011-01-01

    This handbook provides information to residents of the Central United States about the threat of earthquakes in that area, particularly along the New Madrid seismic zone, and explains how to prepare for, survive, and recover from such events. It explains the need for concern about earthquakes for those residents and describes what one can expect during and after an earthquake. Much is known about the threat of earthquakes in the Central United States, including where they are likely to occur and what can be done to reduce losses from future earthquakes, but not enough has been done to prepare for future earthquakes. The handbook describes such preparations that can be taken by individual residents before an earthquake to be safe and protect property.

  8. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable medium includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable medium includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphics processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
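    A minimal sketch of the detection idea described in the abstract: every computing element iterates the same chaotic map from the same seed, so a healthy element reproduces the reference trajectory essentially bit-for-bit and any fault shows up as a rapidly growing divergence. The logistic map, the injected fault, and the tolerance below are illustrative choices, not the patented implementation.

```python
# Minimal sketch: detect a faulty compute element by comparing chaotic-map
# trajectories. Chaotic maps amplify tiny arithmetic errors exponentially, so a
# single wrong floating-point operation quickly produces a visible divergence.

def logistic_trajectory(x0: float, steps: int, fault_at: int = -1) -> list:
    """Iterate x -> 4x(1-x); optionally inject a tiny error at one step."""
    xs, x = [], x0
    for t in range(steps):
        x = 4.0 * x * (1.0 - x)
        if t == fault_at:          # simulate a transient hardware fault
            x += 1e-12
        xs.append(x)
    return xs

def first_divergence(ref: list, test: list, tol: float = 1e-9) -> int:
    """Return the first step where the trajectories differ, or -1 if none."""
    for t, (a, b) in enumerate(zip(ref, test)):
        if abs(a - b) > tol:
            return t
    return -1

if __name__ == "__main__":
    reference = logistic_trajectory(0.123456789, steps=100)
    healthy   = logistic_trajectory(0.123456789, steps=100)
    faulty    = logistic_trajectory(0.123456789, steps=100, fault_at=40)
    print("healthy node diverges at:", first_divergence(reference, healthy))  # -1
    print("faulty  node diverges at:", first_divergence(reference, faulty))   # a few steps after 40
```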

  9. Genetic Algorithm Supported by Graphical Processing Unit Improves the Exploration of Effective Connectivity in Functional Brain Imaging

    Directory of Open Access Journals (Sweden)

    Lawrence Wing Chi Chan

    2015-05-01

    Full Text Available Brain regions of human subjects exhibit certain levels of associated activation upon specific environmental stimuli. Functional Magnetic Resonance Imaging (fMRI) detects regional signals, based on which we could infer the direct or indirect neuronal connectivity between the regions. Structural Equation Modeling (SEM) is an appropriate mathematical approach for analyzing the effective connectivity using fMRI data. A maximum likelihood (ML) discrepancy function is minimized against some constrained coefficients of a path model. The minimization is an iterative process. The computing time is very long as the number of iterations increases geometrically with the number of path coefficients. Using a regular Quad-Core Central Processing Unit (CPU) platform, a duration of up to three months is required for the iterations from 0 to 30 path coefficients. This study demonstrates the application of a Graphical Processing Unit (GPU) with a parallel Genetic Algorithm (GA) that replaces the Powell minimization in the standard program code of the analysis software package. It was found in the same example that GA under GPU reduced the duration to 20 hours and provided a more accurate solution when compared with the standard program code under CPU.
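    A minimal sketch of the optimization pattern described: a genetic algorithm searching a vector of path coefficients to minimize a discrepancy function. A toy quadratic discrepancy stands in for the SEM maximum-likelihood discrepancy, and the population-wide fitness evaluation is the part that the paper maps onto GPU threads; all names and parameter values are illustrative.

```python
# Minimal sketch of a genetic algorithm minimizing a discrepancy function over
# "path coefficients". Fitness evaluation of the whole population is the
# embarrassingly parallel part that would be offloaded to a GPU.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.4, -0.7, 1.2, 0.1])           # "true" coefficients (toy problem)

def discrepancy(coeffs: np.ndarray) -> float:
    return float(np.sum((coeffs - target) ** 2))    # stand-in for the ML discrepancy

def genetic_minimize(n_coeffs=4, pop_size=60, generations=200,
                     mutation_sigma=0.1, elite=10):
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_coeffs))
    for _ in range(generations):
        fitness = np.array([discrepancy(ind) for ind in pop])   # parallel on a GPU
        order = np.argsort(fitness)
        parents = pop[order[:elite]]                            # selection
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(elite, size=2)]
            mask = rng.random(n_coeffs) < 0.5                   # uniform crossover
            child = np.where(mask, a, b)
            child = child + rng.normal(0.0, mutation_sigma, n_coeffs)  # mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    return pop[np.argmin([discrepancy(ind) for ind in pop])]

best = genetic_minimize()
print("best coefficients:", np.round(best, 3), "discrepancy:", round(discrepancy(best), 6))
```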

  10. Social Studies: Application Units. Course II, Teachers. Computer-Oriented Curriculum. REACT (Relevant Educational Applications of Computer Technology).

    Science.gov (United States)

    Tecnica Education Corp., San Carlos, CA.

    This book is one of a series in Course II of the Relevant Educational Applications of Computer Technology (REACT) Project. It is designed to point out to teachers two of the major applications of computers in the social sciences: simulation and data analysis. The first section contains a variety of simulation units organized under the following…

  11. Formalizing the Process of Constructing Chains of Lexical Units

    Directory of Open Access Journals (Sweden)

    Grigorij Chetverikov

    2015-06-01

    Full Text Available The paper investigates mathematical aspects of describing the construction of chains of lexical units on the basis of finite-predicate algebra. An analysis of the construction peculiarities is carried out, and the application of the method of finding the power of a linear logical transformation for removing characteristic words of a dictionary entry is given. Analysis and perspectives of the results of the study are provided.

  12. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design.

    Science.gov (United States)

    Menges, Achim

    2012-03-01

    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies.

  13. Hydroclimatological Processes in the Central American Dry Corridor

    Science.gov (United States)

    Hidalgo, H. G.; Duran-Quesada, A. M.; Amador, J. A.; Alfaro, E. J.; Mora, G.

    2015-12-01

    This work studies the hydroclimatological variability and the climatic precursors of drought in the Central American Dry Corridor (CADC), a subregion located on the Pacific coast of Southern Mexico and Central America. Droughts are frequent in the CADC, which features higher climatological aridity compared to the highlands and Caribbean coast of Central America. The CADC region presents large social vulnerability to hydroclimatological impacts originating from dry conditions, as a large part of the population depends on subsistence agriculture. The influence of large-scale climatic precursors such as ENSO, the Caribbean Low-Level Jet (CLLJ), low-frequency signals from the Pacific and Caribbean, and some intra-seasonal signals such as the MJO is evaluated. Previous work by the authors identified a connection between the CLLJ and CADC precipitation. This connection is more complex than a simple rain-shadow effect; instead, it was suggested that convection at the exit of the jet on the Costa Rica and Nicaragua Caribbean coasts and consequent subsidence in the Pacific could be playing a role in this connection. During summer, when the CLLJ is stronger than normal, the Inter-Tropical Convergence Zone (located mainly in the Pacific) displaces to a more southern position, and vice versa, suggesting a connection between these two processes that has not been fully explained yet. The role of the Western Hemisphere Warm Pool also needs more research. All this is important, as it suggests a working hypothesis that during summer the strength of the Caribbean winds may be responsible for the dry climate of the CADC. Another previous analysis by the authors was based on downscaled precipitation and temperature from GCMs and the NCEP/NCAR reanalysis. The data were later used in a hydrological model. Results showed a negative trend in reanalysis runoff for 1980-2012 in San José (Costa Rica) and Tegucigalpa (Honduras). This highly significant drying trend

  14. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    Science.gov (United States)

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  15. A Computational Dual-Process Model of Social Interaction

    Science.gov (United States)

    2014-01-30

    AFRL-RH-WP-TR-2014-0123, "A Computational Dual-Process Model of Social Interaction," Stephen Deutsch, BBN Technologies, 10 Moulton...; reporting period 01 January 2012 – 01 January 2014; cleared 3 December 2014 (88ABW-2014-5663). Abstract: Dual-process models postulate two distinct modes of information processing

  16. Cloud Computing Solutions for the Marine Corps: An Architecture to Support Expeditionary Logistics

    Science.gov (United States)

    2013-09-01

    Acronyms defined in the document include COTS (commercial off the shelf), CONUS (Continental United States), CPU (central processing unit), and CRM (Customer Relationship Management). Other fragments reference the ...Delivery Service (GCDS); Forge.mil development platform tools; RightNow Customer Relationship Management (CRM) tools; and Rapid Access Computing..., with cloud services freeing the customer from the burden and costs of maintaining the IT network, since it is managed by an external provider (United States

  17. Operating The Central Process Systems At Glenn Research Center

    Science.gov (United States)

    Weiler, Carly P.

    2004-01-01

    As a research facility, the Glenn Research Center (GRC) trusts and expects all the systems controlling its facilities to run properly and efficiently in order for its research and operations to occur proficiently and on time. While there are many systems necessary for the operations at GRC, one of the most vital systems is the Central Process Systems (CPS). The CPS controls operations used by GRC's wind tunnels, propulsion systems lab, engine components research lab, and compressor, turbine and combustor test cells. Used widely throughout the lab, it operates equipment such as exhausters, chillers, cooling towers, compressors, dehydrators, and other such equipment. Through parameters such as pressure, temperature, speed, flow, etc., it performs its primary operations on the major systems of Electrical Dispatch (ED), Central Air Dispatch (CAD), Central Air Equipment Building (CAEB), and Engine Research Building (ERB). In order for the CPS to continue its operations at Glenn, a new contract must be awarded. Consequently, one of my primary responsibilities was assisting the Source Evaluation Board (SEB) with the process of awarding the recertification contract for the CPS. The job of the SEB was to evaluate the proposals of the contract bidders and then to present their findings to the Source Selecting Official (SSO). Before the evaluations began, the Center Director established the level of the competition. For this contract, the competition was limited to those companies classified as a small, disadvantaged business. After an industry briefing that explained to qualified companies the CPS and the type of work required, each of the interested companies then submitted proposals addressing three components: Mission Suitability, Cost, and Past Performance. These proposals were based on the Statement of Work (SOW) written by the SEB. After companies submitted their proposals, the SEB reviewed all three components and then presented their results to the SSO. While the

  18. From Graphics Processing Unit to General-Purpose Graphics Processing Unit (GPGPU)

    Institute of Scientific and Technical Information of China (English)

    刘金硕; 刘天晓; 吴慧; 曾秋梅; 任梦菲; 顾宜淳

    2013-01-01

    This paper defines the GPU (graphics processing unit), general-purpose computation on the GPU (general purpose GPU, GPGPU), and GPU-based programming models and environments. It divides the development of the GPU into four stages and describes the evolution of GPU architecture from the non-unified render architecture to the unified render architecture and on to the new-generation Fermi architecture. It then compares the GPGPU architecture with multi-core CPU and distributed cluster architectures from both hardware and software perspectives. The analysis shows that medium-grained, thread-level, data-intensive parallel computation is best handled with multi-core, multi-threaded parallelism; coarse-grained, network-intensive parallel computation with cluster parallelism; and fine-grained, compute-intensive parallel computation with GPGPU parallelism. Finally, the paper surveys future GPGPU research hotspots and directions, namely automatic parallelization for GPGPU, CUDA support for multiple languages, and CUDA performance optimization, and introduces some typical GPGPU applications.

  19. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to

  20. Cognitive computation of single fuzzy unit events based on problems

    Institute of Scientific and Technical Information of China (English)

    冯康

    2015-01-01

    To correct the shortcomings of existing cognitive computation, a problem-based cognitive computation of single fuzzy unit events is proposed. It comprises three distinct stages: perception computation, modeling computation, and decision computation. In perception computation, single fuzzy unit events occurring in the outside world are preprocessed according to the problems and the models, and the selected single fuzzy unit events are computed into cognitions. In modeling computation, the different cognitions are computed into models. In decision computation, instructions submitted from outside are received and the answers that fulfill the instructions are computed from the models. Experimental results demonstrate that the problem-based cognitive computation of single fuzzy unit events corrects the shortcomings of existing cognitive computation; it is therefore an accurate simulation of the process by which the human brain handles cognitive information.

  1. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    Full Text Available OBJECTIVE: To analyze auditory processing test results in children suffering from otitis media in their first five years of age, considering their age. Furthermore, to classify central auditory processing test findings regarding the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 students from public school without a history of otitis media. Experimental group I consisted of 39 students from public schools and experimental group II consisted of 30 students from private schools; students in both groups suffered from secretory otitis media in their first five years of age and underwent surgery for placement of bilateral ventilation tubes. The individuals underwent complete audiological evaluation and assessment by Auditory Processing tests. RESULTS: The left ear showed significantly worse performance when compared to the right ear in the dichotic digits test and pitch pattern sequence test. The students from the experimental groups showed worse performance when compared to the control group in the dichotic digits test and gaps-in-noise. Children from experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests compared with experimental group II. The hearing skills that were altered were temporal resolution and figure-ground perception. CONCLUSION: Children who suffered from secretory otitis media in their first five years and who underwent surgery for placement of bilateral ventilation tubes showed worse performance in auditory abilities, and children from public schools had worse results on auditory processing tests compared with students from private schools.

  2. Conceptual framework for the integration of computer-aided design and computer-aided process planning

    Energy Technology Data Exchange (ETDEWEB)

    Li, R.K.

    1986-01-01

    This research presents a conceptual framework for the integration of Computer-Aided Design (CAD) and Computer-Aided Process Planning (CAPP). The conceptual framework resides in an environment of N CAD systems and M CAPP systems. It consists of three major modules: a generic-part definition data structure, a preprocessor, and a postprocessor. The generic-part definition data structure was developed to serve as a neutral part-definition data representation between CAD and CAPP systems. With this structure, the number of interfacing systems can be reduced to 1 + M systems. The preprocessor, a part feature recognition system, is designed to extract part definition data from an IGES file, evaluate that data, allow inclusion of unsupported data, and finally put the data into the data structure. The postprocessor was written to convert the data from the data structure to the part input format of a selected CAPP system. A prototype system that uses IBM's CAD package (CADAM), IGES, and United Technologies Research Center's CAPP package (CMPP) was developed to test and prove the concept of this research. The input is a CADAM graphic design file and the outputs are a summary of operations and a tolerance control chart, which are ready to be used in the production shops.

  3. Dissemination of computer skills among physicians: the infectious process model.

    Science.gov (United States)

    Quinn, F B; Hokanson, J A; McCracken, M M; Stiernberg, C M

    1984-08-01

    While the potential utility of computer technology to medicine is often acknowledged, little is known about the best methods to actually teach physicians about computers. The current variability in physician computer fluency implies there is no accepted minimum required level of computer skills for physicians. Special techniques are needed to instill these skills in the physician and to measure their effects within the medical profession. This hypothesis is suggested following the development of a specialized course for the new physician. In a population of physicians where medical computing usage was considered nonexistent, intense interest developed following exposure to a role model having strong credentials in both medicine and computer science. This produced an atmosphere in which there was a perceived benefit in being knowledgeable about medical computer usage. The subsequent increase in computer systems use was the result of the availability of resources and the development of computer skills that could be exchanged among the students and faculty. This growth in computer use is described using the parameters of an infectious process model. While other approaches may also be useful, the infectious process model permits the growth of medical computer usage to be quantitatively described, evaluates specific determinants of use patterns, and allows the future growth of computer utilization in medicine to be predicted.
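    The "infectious process" description can be pictured with a simple SI-type (logistic) adoption model in which new computer users per month are proportional to contacts between current users and not-yet users. The sketch below only illustrates that idea; the population size and contact rate are invented, not taken from the study.

```python
# Minimal sketch of an "infectious process" view of skill dissemination: in an
# SI-type model, new adopters per step are proportional to contacts between
# current users (I) and non-users (S). Parameter values are purely illustrative.

def adoption_curve(population: int, initial_users: int, contact_rate: float, months: int):
    users = float(initial_users)
    history = [users]
    for _ in range(months):
        susceptible = population - users
        new_users = contact_rate * users * susceptible / population
        users = min(population, users + new_users)
        history.append(users)
    return history

curve = adoption_curve(population=200, initial_users=2, contact_rate=0.8, months=24)
for month in (0, 6, 12, 18, 24):
    print(f"month {month:2d}: ~{curve[month]:.0f} physicians using the computer system")
```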

  4. 77 FR 28621 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and...

    Science.gov (United States)

    2012-05-15

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and... and desist order in this investigation would affect the public health and welfare in the United...

  5. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, referring to the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data ranges over data mining, data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our lives better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing-system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in effective management of dynamic and large-scale data, and efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering a big size of data, variety of data and frequent chan...

  6. A Computational Chemistry Database for Semiconductor Processing

    Science.gov (United States)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if the codes do not come with a reliable database of chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database and instead leave the task of finding whatever is needed to the user. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area. Our effort focuses on the following five areas: (1) thermal CVD reaction mechanisms and rate constants; (2) thermochemical properties; (3) transport properties; (4) electron-molecule collision cross sections; and (5) gas-surface interactions.

  7. Integration of distributed computing into the drug discovery process.

    Science.gov (United States)

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low cost. We give a short introduction to the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is embedding the grid seamlessly into the discovery process. User-friendly access to powerful algorithms without restrictions such as a limited number of licenses has to be the goal of grid computing in drug discovery.

  8. Oaks were the historical foundation genus of the east-central United States

    Science.gov (United States)

    Hanberry, Brice B.; Nowacki, Gregory J.

    2016-08-01

    Foundation tree species are dominant and define ecosystems. Because of the historical importance of oaks (Quercus) in east-central United States, it was unlikely that oak associates, such as pines (Pinus), hickories (Carya) and chestnut (Castanea), rose to this status. We used 46 historical tree studies or databases (ca. 1620-1900) covering 28 states, 1.7 million trees, and 50% of the area of the eastern United States to examine importance of oaks compared to pines, hickories, and chestnuts. Oak was the most abundant genus, ranging from 40% to 70% of total tree composition at the ecological province scale and generally increasing in dominance from east to west across this area. Pines, hickories, and chestnuts were co-dominant (ratio of oak composition to other genera of <2) in no more than five of 70 ecological subsections and two of 20 ecological sections in east-central United States, and thus by definition, were not foundational. Although other genera may be called foundational because of localized abundance or perceptions resulting from inherited viewpoints, they decline from consideration when compared to overwhelming oak abundance across this spatial extent. The open structure and high-light conditions of oak ecosystems uniquely supported species-rich understories. Loss of oak as a foundation genus has occurred with loss of open forest ecosystems at landscape scales.

  9. COST ESTIMATION MODELS FOR DRINKING WATER TREATMENT UNIT PROCESSES

    Science.gov (United States)

    Cost models for unit processes typically utilized in a conventional water treatment plant and in package treatment plant technology are compiled in this paper. The cost curves are represented as a function of specified design parameters and are categorized into four major catego...

  10. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2010-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the Graphics Processing Unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expan
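    The core idea behind Fourier-cosine-expansion (COS-type) pricing can be sketched as follows: recover the density of the log-return from its characteristic function with a truncated cosine series, then integrate the discounted payoff against it. The production method uses analytic payoff coefficients instead of the numerical quadrature below, and the GPU work parallelizes over strikes and series terms; Black-Scholes dynamics are assumed here only so the result can be checked against the closed form.

```python
# Minimal sketch of Fourier-cosine-expansion pricing: rebuild the log-return
# density from its characteristic function with a truncated cosine series, then
# integrate the discounted call payoff numerically.
import numpy as np
from math import log, sqrt, exp, erf

S0, K, r, sigma, T = 100.0, 110.0, 0.05, 0.2, 1.0
N = 128                                          # number of cosine-series terms
mu = (r - 0.5 * sigma ** 2) * T                  # mean of log-return X = ln(S_T/S0)
a, b = mu - 8 * sigma * sqrt(T), mu + 8 * sigma * sqrt(T)   # truncation interval

def charfun(u):
    """Characteristic function of X under Black-Scholes dynamics."""
    return np.exp(1j * u * mu - 0.5 * sigma ** 2 * T * u ** 2)

# Cosine-series coefficients of the density on [a, b]
k = np.arange(N)
u = k * np.pi / (b - a)
A = (2.0 / (b - a)) * np.real(charfun(u) * np.exp(-1j * u * a))
A[0] *= 0.5                                      # the k = 0 term carries weight 1/2

# Recover the density on a grid and integrate the discounted call payoff
x = np.linspace(a, b, 2001)
density = A @ np.cos(np.outer(u, x - a))
payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
price_cos = np.exp(-r * T) * np.sum(payoff * density) * (x[1] - x[0])

# Black-Scholes closed form for comparison
d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
price_bs = S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

print(f"cosine-expansion price: {price_cos:.4f}   Black-Scholes: {price_bs:.4f}")
```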

  11. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2014-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the graphics processing unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expan

  12. EEG processing and its application in brain-computer interface

    Institute of Scientific and Technical Information of China (English)

    Wang Jing; Xu Guanghua; Xie Jun; Zhang Feng; Li Lili; Han Chengcheng; Li Yeping; Sun Jingjing

    2013-01-01

    Electroencephalogram (EEG) is an efficient tool for exploring the human brain. It plays a very important role in the diagnosis of disorders related to epilepsy and in the development of new interaction techniques between machines and human beings, namely, the brain-computer interface (BCI). The purpose of this review is to illustrate recent research in EEG processing and EEG-based BCI. First, we outline several methods for removing artifacts from EEGs, and classical algorithms for fatigue detection are discussed. Then, two BCI paradigms, including motor imagery and steady-state motion visual evoked potentials (SSMVEP) produced by oscillating Newton's rings, are introduced. Finally, BCI systems including wheelchair control and electronic car navigation are elaborated. As a new technique to control equipment, BCI has promising potential in the rehabilitation of central nervous system disorders, such as stroke and spinal cord injury, the treatment of attention deficit hyperactivity disorder (ADHD) in children, and the development of novel games such as brain-controlled auto racing.

  13. Computer Aided Teaching of Digital Signal Processing.

    Science.gov (United States)

    Castro, Ian P.

    1990-01-01

    Describes a microcomputer-based software package developed at the University of Surrey for teaching digital signal processing to undergraduate science and engineering students. Menu-driven software capabilities are explained, including demonstration of qualitative concepts and experimentation with quantitative data, and examples are given of…

  14. Spinning disc atomisation process: Modelling and computations

    Science.gov (United States)

    Li, Yuan; Sisoev, Grigory; Shikhmurzaev, Yulii

    2016-11-01

    The atomisation of liquids using a spinning disc (SDA), where the centrifugal force is used to generate a continuous flow, with the liquid eventually disintegrating into drops which, on solidification, become particles, is a key element in many technologies. Examples of such technologies range from powder manufacturing in metallurgy to various biomedical applications. In order to be able to control the SDA process, it is necessary to understand it as a whole, from the feeding of the liquid and the wave pattern developing on the disc to the disintegration of the liquid film into filaments and these into drops. The SDA process has been the subject of a number of experimental studies and some elements of it, notably the film on a spinning disc and the dynamics of the jets streaming out from it, have been investigated theoretically. However, to date there have been no studies of the process as a whole, including, most importantly, the transition zone where the film that has already developed a certain wave pattern disintegrates into jets that spiral out. The present work reports some results of an ongoing project aimed at producing a definitive map of regimes occurring in the SDA process and their outcome.

  15. Computer Modeling of Complete IC Fabrication Process.

    Science.gov (United States)

    1984-01-01

    now makes the correlation between process specification and resulting physically observable parameters a viable and valuable design tool.

  16. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Krichinsky, A.M.

    1983-02-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to estimate the compositions of materials in vessels involved in unit operations and chemical processes. This program has been implemented in a remotely operated nuclear fuel processing plant. NUMATH provides estimates of the steady-state composition of materials residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are cataloged in container-oriented files. The estimated compositions represent materials collected in applicable vessels, including consideration of materials previously acknowledged in these vessels. The program utilizes process measurements and simple performance models to estimate material holdup and distribution within unit operations. In simulated run-testing, NUMATH typically produced estimates within 5% of the measured inventories for uranium and within 8% of the measured inventories for thorium during steady-state process operation.

  17. Characterization of the Temporal Clustering of Flood Events across the Central United States in terms of Climate States

    Science.gov (United States)

    Mallakpour, Iman; Villarini, Gabriele; Jones, Michael; Smith, James

    2016-04-01

    The central United States is a region of the country that has been plagued by frequent catastrophic flooding (e.g., flood events of 1993, 2008, 2013, and 2014), with large economic and social repercussions (e.g., fatalities, agricultural losses, flood losses, water quality issues). The goal of this study is to examine whether it is possible to describe the occurrence of flood events at the sub-seasonal scale in terms of variations in the climate system. Daily streamflow time series from 774 USGS stream gage stations over the central United States (defined here to include North Dakota, South Dakota, Nebraska, Kansas, Missouri, Iowa, Minnesota, Wisconsin, Illinois, West Virginia, Kentucky, Ohio, Indiana, and Michigan) with a record of at least 50 years and ending no earlier than 2011 are used for this study. We use a peak-over-threshold (POT) approach to identify flood peaks so that we have, on average two events per year. We model the occurrence/non-occurrence of a flood event over time using regression models based on Cox processes. Cox processes are widely used in biostatistics and can be viewed as a generalization of Poisson processes. Rather than assuming that flood events occur independently of the occurrence of previous events (as in Poisson processes), Cox processes allow us to account for the potential presence of temporal clustering, which manifests itself in an alternation of quiet and active periods. Here we model the occurrence/non-occurrence of flood events using two climate indices as climate time-varying covariates: the North Atlantic Oscillation (NAO) and the Pacific-North American pattern (PNA). The results of this study show that NAO and/or PNA can explain the temporal clustering in flood occurrences in over 90% of the stream gage stations we considered. Analyses of the sensitivity of the results to different average numbers of flood events per year (from one to five) are also performed and lead to the same conclusions. The findings of this work
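    A minimal sketch of the workflow described, with synthetic data: daily flows are thresholded into annual flood counts (a peak-over-threshold step, without the declustering a real analysis would apply), and a log-linear Poisson rate model lambda = exp(b0 + b1*NAO) is fit by maximum likelihood. This time-homogeneous Poisson regression is a simplified stand-in for the Cox-process regression used in the study.

```python
# Minimal sketch of the peak-over-threshold (POT) + climate-covariate workflow.
# Real analyses would decluster consecutive exceedances before counting floods.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n_years, days = 60, 365
nao = rng.normal(0.0, 1.0, n_years)                  # synthetic annual NAO index

# Synthetic daily flows: log-normal background whose scale shifts with NAO,
# so high-NAO years exceed the flood threshold more often.
flows = np.exp(rng.normal(0.0, 1.0, (n_years, days)) + 0.15 * nao[:, None])
threshold = np.quantile(flows, 0.995)                # POT threshold
counts = np.sum(flows > threshold, axis=1)           # annual flood counts

def neg_log_likelihood(params):
    b0, b1 = params
    eta = b0 + b1 * nao                              # log of the annual flood rate
    return float(np.sum(np.exp(eta) - counts * eta)) # Poisson NLL up to a constant

fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
b0_hat, b1_hat = fit.x
print(f"mean floods/year: {counts.mean():.2f}")
print(f"fitted rate model: lambda = exp({b0_hat:.2f} + {b1_hat:.2f} * NAO)")
```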

  18. Fast Pyrolysis Process Development Unit for Validating Bench Scale Data

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Robert C. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab.. Center for Sustainable Environmental Technologies. Bioeconomy Inst.; Jones, Samuel T. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab.. Center for Sustainable Environmental Technologies. Bioeconomy Inst.

    2010-03-31

    The purpose of this project was to prepare and operate a fast pyrolysis process development unit (PDU) that can validate experimental data generated at the bench scale. In order to do this, a biomass preparation system, a modular fast pyrolysis fluidized bed reactor, modular gas clean-up systems, and modular bio-oil recovery systems were designed and constructed. Instrumentation for centralized data collection and process control were integrated. The bio-oil analysis laboratory was upgraded with the addition of analytical equipment needed to measure C, H, O, N, S, P, K, and Cl. To provide a consistent material for processing through the fluidized bed fast pyrolysis reactor, the existing biomass preparation capabilities of the ISU facility needed to be upgraded. A stationary grinder was installed to reduce biomass from bale form to 5-10 cm lengths. A 25 kg/hr rotary kiln drier was installed. It has the ability to lower moisture content to the desired level of less than 20% wt. An existing forage chopper was upgraded with new screens. It is used to reduce biomass to the desired particle size of 2-25 mm fiber length. To complete the material handling between these pieces of equipment, a bucket elevator and two belt conveyors must be installed. The bucket elevator has been installed. The conveyors are being procured using other funding sources. Fast pyrolysis bio-oil, char and non-condensable gases were produced from an 8 kg/hr fluidized bed reactor. The bio-oil was collected in a fractionating bio-oil collection system that produced multiple fractions of bio-oil. This bio-oil was fractionated through two separate, but equally important, mechanisms within the collection system. The aerosols and vapors were selectively collected by utilizing laminar flow conditions to prevent aerosol collection and electrostatic precipitators to collect the aerosols. The vapors were successfully collected through a selective condensation process. The combination of these two mechanisms

  19. A 1.5 GFLOPS Reciprocal Unit for Computer Graphics

    DEFF Research Database (Denmark)

    Nannarelli, Alberto; Rasmussen, Morten Sleth; Stuart, Matthias Bo

    2006-01-01

    The reciprocal operation 1/d is a frequent operation performed in graphics processors (GPUs). In this work, we present the design of a radix-16 reciprocal unit based on the algorithm combining the traditional digit-by-digit algorithm and the approximation of the reciprocal by one Newton...
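    The refinement step alluded to at the end of the abstract can be sketched in software: a coarse seed for 1/d is improved by Newton-Raphson iterations x_{n+1} = x_n(2 - d*x_n), each of which roughly doubles the number of correct bits. The sketch below emulates this in double precision with a textbook linear seed; the actual unit uses a radix-16 digit recurrence in fixed-point hardware, which is not reproduced here.

```python
# Sketch of reciprocal refinement by Newton-Raphson: x_{n+1} = x_n * (2 - d*x_n).
# Each iteration roughly squares the relative error, i.e. doubles the correct bits.
import math

def reciprocal(d: float, iterations: int = 3) -> float:
    """Approximate 1/d with a table-free linear seed plus Newton-Raphson steps."""
    assert d > 0.0
    k = math.floor(math.log2(d)) + 1
    m = d / (2.0 ** k)                                   # scaled operand in [0.5, 1)
    x = (48.0 / 17.0 - 32.0 / 17.0 * m) / (2.0 ** k)     # seed for 1/d (rel. error < 1/17)
    for _ in range(iterations):
        x = x * (2.0 - d * x)                            # Newton step for f(x) = 1/x - d
    return x

for d in (3.0, 0.7, 1234.5):
    approx = reciprocal(d)
    print(f"1/{d} ~= {approx:.12f}   error {abs(approx - 1.0/d):.2e}")
```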

  20. Neonatal mortality in intensive care units of Central Brazil

    Directory of Open Access Journals (Sweden)

    Claci F Weirich

    2005-10-01

    Full Text Available OBJECTIVE: To identify potential prognostic factors for neonatal mortality among newborns referred to intensive care units. METHODS: A live-birth cohort study was carried out in Goiânia, Central Brazil, from November 1999 to October 2000. Linked birth and infant death certificates were used to ascertain the cohort of live born infants. An additional active surveillance system of neonatal-based mortality was implemented. Exposure variables were collected from birth and death certificates. The outcome was survivors (n=713) and deaths (n=162) in all intensive care units in the study period. Cox's proportional hazards model was applied and a Receiver Operating Characteristic curve was used to compare the performance of statistically significant variables in the multivariable model. Adjusted mortality rates by birth weight and 5-min Apgar score were calculated for each intensive care unit. RESULTS: Low birth weight and 5-min Apgar score remained independently associated with death. Birth weight equal to 2,500 g had 0.71 accuracy (95% CI: 0.65-0.77) for predicting neonatal death (sensitivity = 72.2%). A wide variation in the mortality rates was found among intensive care units (9.5-48.1%), and two of them remained with significantly high mortality rates even after adjusting for birth weight and 5-min Apgar score. CONCLUSIONS: This study corroborates birth weight as a sensitive screening variable in surveillance programs for neonatal death and also to target intensive care units with high mortality rates for implementing preventive actions and interventions during the delivery period.

  1. Convective transport over the central United States and its role in regional CO and ozone budgets

    Science.gov (United States)

    Thompson, Anne M.; Pickering, Kenneth E.; Dickerson, Russell R.; Ellis, William G., Jr.; Jacob, Daniel J.; Scala, John R.; Tao, Wei-Kuo; Mcnamara, Donna P.; Simpson, Joanne

    1994-01-01

    We have constructed a regional budget for boundary layer carbon monoxide over the central United States (32.5 deg - 50 deg N, 90 deg - 105 deg W), emphasizing a detailed evaluation of deep convective vertical fluxes appropriate for the month of June. Deep convective venting of the boundary layer (upward) dominates other components of the CO budget, e.g., downward convective transport, loss of CO by oxidation, anthropogenic emissions, and CO produced from oxidation of methane, isoprene, and anthropogenic nonmethane hydrocarbons (NMHCs). Calculations of deep convective venting are based on the method of Pickering et al. (1992a), which uses a satellite-derived deep convective cloud climatology along with transport statistics from convective cloud model simulations of observed prototype squall line events. This study uses analyses of convective episodes in 1985 and 1989 and CO measurements taken during several midwestern field campaigns. Deep convective venting of the boundary layer over this moderately polluted region provides a net (upward minus downward) flux of 18.1 x 10^8 kg CO/month to the free troposphere during early summer. Shallow cumulus and synoptic-scale weather systems together make a comparable contribution (total net flux 16.2 x 10^8 kg CO/month). Boundary layer venting of CO with other O3 precursors leads to efficient free tropospheric O3 formation. We estimate that deep convective transport of CO and other precursors over the central United States in early summer leads to a gross production of 0.66 - 1.1 Gmol O3/d, in good agreement with estimates of O3 production from boundary layer venting in a continental-scale model (Jacob et al., 1993a, b). In this respect the central U.S. region acts as a 'chimney' for the country, and presumably this O3 contributes to high background levels of O3 in the eastern United States and O3 export to the North Atlantic.

  2. Practical Secure Computation with Pre-Processing

    DEFF Research Database (Denmark)

    Zakarias, Rasmus Winther

    are implemented in practice and show state-of-the-art performance for the Oblivious AES benchmark application. We do 680 AES circuits in parallel within 3 seconds, resulting in an amortized execution time of 4 ms per AES block. The latency of 3 seconds is hard to cope with in practical scenarios... ...-processing needed to do 15 AES blocks. An interesting continuation of this work would be to apply our technique to other symmetric primitives such as SHA-256. Another interesting application of the MiniMac protocol is that of large integer multiplication. We present in our third main result a technique that allows

  3. Point process models for household distributions within small areal units

    Directory of Open Access Journals (Sweden)

    Zack W. Almquist

    2012-06-01

    Full Text Available Spatio-demographic data sets are increasingly available worldwide, permitting ever more realistic modeling and analysis of social processes ranging from mobility to disease transmission. The information provided by these data sets is typically aggregated by areal unit, for reasons of both privacy and administrative cost. Unfortunately, such aggregation does not permit fine-grained assessment of geography at the level of individual households. In this paper, we propose to partially address this problem via the development of point process models that can be used to effectively simulate the location of individual households within small areal units.
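    The basic building block of such models can be sketched as a homogeneous spatial Poisson process: draw a Poisson number of households whose mean matches the unit's reported total and scatter them uniformly over the areal unit. Real models in this line of work condition on covariates and add clustering or inhibition between points; the counts and unit geometry below are invented for illustration.

```python
# Minimal sketch: place households inside a rectangular areal unit as a
# homogeneous spatial Poisson process whose intensity reproduces the unit's
# reported household count.
import numpy as np

rng = np.random.default_rng(7)

def simulate_households(n_reported: int, width: float, height: float):
    """Draw a Poisson number of households (mean = reported count) and
    scatter them uniformly over the areal unit."""
    n = rng.poisson(n_reported)
    xs = rng.uniform(0.0, width, n)
    ys = rng.uniform(0.0, height, n)
    return np.column_stack([xs, ys])

unit_households = 120          # count reported for the areal unit (e.g., a block)
points = simulate_households(unit_households, width=1.0, height=0.6)
print(f"simulated {len(points)} households; first three locations:\n{points[:3]}")
```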

  4. Research on Three Dimensional Computer Assistance Assembly Process Design System

    Institute of Scientific and Technical Information of China (English)

    HOU Wenjun; YAN Yaoqi; DUAN Wenjia; SUN Hanxu

    2006-01-01

    Computer-aided process planning will certainly play a significant role in the success of enterprise informationization. Three-dimensional design will promote three-dimensional process planning. This article analyzes the current situation and problems of assembly process planning, presents a three-dimensional computer-aided assembly process planning system (3D-VAPP), and investigates product information extraction, assembly sequence and path planning in visual interactive assembly process design, dynamic emulation of assembly and process verification, assembly animation output and automatic exploded-view generation, interactive craft filling and craft knowledge management, etc. It also gives a multi-layer collision detection and multi-perspective automatic camera switching algorithm. Experiments were done to validate the feasibility of such technology and algorithms, which established the foundation of three-dimensional computer-aided process planning.

  5. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  6. Investigation of Central Pain Processing in Post-Operative Shoulder Pain and Disability

    OpenAIRE

    Valencia, Carolina; Fillingim, Roger B.; Bishop, Mark; Wu, Samuel S.; Wright, Thomas W.; Moser, Michael; Farmer, Kevin; George, Steven Z.

    2014-01-01

    Measures of central pain processing, such as conditioned pain modulation (CPM) and suprathreshold heat pain response (SHPR), have been described to assess different components of central pain modulatory mechanisms. Central pain processing potentially plays a role in the development of postsurgical pain; however, the role of CPM and SHPR in explaining postoperative clinical pain and disability is still unclear.

  7. Effect of High Receiver Thermal Loss Per Unit Area on the Performance of Solar Central Receiver Systems Having Optimum Heliostat Fields and Optimum Receiver Aperture Areas.

    Science.gov (United States)

    Pitman, Charles L.

    Recent efforts in solar central receiver research have been directed toward high temperature applications. Associated with high temperature processes are greater receiver thermal losses due to reradiation and convection. This dissertation examines the performance of central receiver systems having optimum heliostat fields and receiver aperture areas as a function of receiver thermal loss per unit area of receiver aperture. The results address the problem of application optimization (loss varies) as opposed to the problem of optimization of a design for a specific application (loss fixed). A reasonable range of values for the primary independent variable L (the average reradiative and convective loss per unit area of receiver aperture) and a reasonable set of design assumptions were first established. The optimum receiver aperture area, number and spacings of heliostats, and field boundary were then determined for two tower focal heights and for each value of L. From this, the solar subsystem performance for each optimized system was calculated. Heliostat field analysis and optimization required a detailed computational analysis. A significant modification to the standard method of solving the optimization equations, effectively a decoupling of the solution process into collector and receiver subsystem parts, greatly aided the analysis. Results are presented for tower focal heights of 150 and 180 m. Values of L ranging from 0.04 to 0.50 MW m^-2 were considered, roughly corresponding to working fluid temperatures (at receiver exit) in the range of 650 to 1650 C. As L increases over this range, the receiver thermal efficiency and the receiver interception factor decrease. The optimal power level drops by almost half, and the cost per unit of energy produced increases by about 25% for the base case set of design assumptions. The resulting decrease in solar subsystem efficiency (relative to the defined annual input energy) from 0.57 to 0.35 is about 40% and is a

  8. Accelerated 3D Monte Carlo light dosimetry using a graphics processing unit (GPU) cluster

    Science.gov (United States)

    Lo, William Chun Yip; Lilge, Lothar

    2010-11-01

    This paper presents a basic computational framework for real-time, 3-D light dosimetry on graphics processing unit (GPU) clusters. The GPU-based approach offers a direct solution to overcome the long computation time preventing Monte Carlo simulations from being used in complex optimization problems such as treatment planning, particularly if simulated annealing is employed as the optimization algorithm. The current multi-GPU implementation is validated using commercial light modelling software (ASAP from Breault Research Organization). It also supports the latest Fermi GPU architecture and features an interactive 3-D visualization interface. The software is available for download at http://code.google.com/p/gpu3d.

  9. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  10. Computer and control applications in a vegetable processing plant

    Science.gov (United States)

    There are many advantages to the use of computers and control in the food industry. Software in the food industry takes two forms: general-purpose commercial computer software and software for specialized applications, such as drying and thermal processing of foods. Many applied simulation models for d...

  11. Computer Data Processing of the Hydrogen Peroxide Decomposition Reaction

    Institute of Scientific and Technical Information of China (English)

    余逸男; 胡良剑

    2003-01-01

    Two methods of computer data processing, linear fitting and nonlinear fitting, are applied to compute the rate constant of the hydrogen peroxide decomposition reaction. The results indicate that the new methods not only work without the need to measure the final oxygen volume, but also markedly reduce the fitting errors.
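    The two data-processing routes can be contrasted on synthetic first-order kinetics data V(t) = V_inf(1 - e^{-kt}): the classical linear fit of ln(V_inf - V) against t requires the final oxygen volume V_inf, while a nonlinear least-squares fit treats V_inf as a fitted parameter, so it need not be measured. The rate constant, volumes, and noise level below are invented for illustration.

```python
# Minimal sketch: estimate the first-order rate constant k of H2O2 decomposition
# from oxygen-volume data V(t) = V_inf * (1 - exp(-k t)).
#  - linear route: fit ln(V_inf - V) vs t, which requires knowing V_inf;
#  - nonlinear route: fit V_inf and k together, so V_inf need not be measured.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
k_true, V_inf_true = 0.045, 52.0                      # 1/min, mL (synthetic "truth")
t = np.arange(0.0, 40.0, 2.0)                         # minutes
V = V_inf_true * (1.0 - np.exp(-k_true * t)) + rng.normal(0.0, 0.3, t.size)

# Linear fitting (classical): needs the final volume V_inf
y = np.log(V_inf_true - V)
slope, intercept = np.polyfit(t, y, 1)
k_linear = -slope

# Nonlinear fitting: V_inf is estimated from the data as well
model = lambda t, V_inf, k: V_inf * (1.0 - np.exp(-k * t))
(V_inf_fit, k_nonlinear), _ = curve_fit(model, t, V, p0=[40.0, 0.01])

print(f"k (linear, V_inf known)     = {k_linear:.4f} 1/min")
print(f"k (nonlinear, V_inf fitted) = {k_nonlinear:.4f} 1/min, V_inf = {V_inf_fit:.1f} mL")
```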

  12. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. These tools are used to develop an optimized process using rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

  13. Accelerating Image Reconstruction in Three-Dimensional Optoacoustic Tomography on Graphics Processing Units

    CERN Document Server

    Wang, Kun; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A; 10.1118/1.4774361

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional (2D) imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphic processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer-simulation and experimental studies are conducted to investigate the computational efficiency and numerical a...

  14. Everything You Always Wanted to Know about Computers but Were Afraid to Ask.

    Science.gov (United States)

    DiSpezio, Michael A.

    1989-01-01

    An overview of the basics of computers is presented. Definitions and discussions of processing, programs, memory, DOS, anatomy and design, central processing unit (CPU), disk drives, floppy disks, and peripherals are included. This article was designed to help teachers to understand basic computer terminology. (CW)

  15. Farm Process (FMP) Parameters used in the Central Valley Hydrologic Model (CVHM)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This digital dataset defines the farm-process parameters used in the transient hydrologic model of the Central Valley flow system. The Central Valley encompasses an...

  16. Interventions on central computing services during the weekend of 21 and 22 August

    CERN Multimedia

    2004-01-01

    As part of the planned upgrade of the computer centre infrastructure to meet the LHC computing needs, approximately 150 servers, hosting in particular the NICE home directories, Mail services and Web services, will need to be physically relocated to another part of the computing hall during the weekend of the 21 and 22 August. On Saturday 21 August, starting from 8:30a.m. interruptions of typically 60 minutes will take place on the following central computing services: NICE and the whole Windows infrastructure, Mail services, file services (including home directories and DFS workspaces), Web services, VPN access, Windows Terminal Services. During any interruption, incoming mail from outside CERN will be queued and delivered as soon as the service is operational again. All Services should be available again on Saturday 21 at 17:30 but a few additional interruptions will be possible after that time and on Sunday 22 August. IT Department

  17. Proton computed tomography from multiple physics processes

    Science.gov (United States)

    Bopp, C.; Colin, J.; Cussol, D.; Finck, Ch; Labalme, M.; Rousseau, M.; Brasse, D.

    2013-10-01

    Proton CT (pCT) nowadays aims at improving hadron therapy treatment planning by mapping the relative stopping power (RSP) of materials with respect to water. The RSP depends mainly on the electron density of the materials. The main information used is the energy of the protons. However, during a pCT acquisition, the spatial and angular deviation of each particle is recorded and the information about its transmission is implicitly available. The potential use of those observables in order to get information about the materials is being investigated. Monte Carlo simulations of protons sent into homogeneous materials were performed, and the influence of the chemical composition on the outputs was studied. A pCT acquisition of a head phantom scan was simulated. Brain lesions with the same electron density but different concentrations of oxygen were used to evaluate the different observables. Tomographic images from the different physics processes were reconstructed using a filtered back-projection algorithm. Preliminary results indicate that information is present in the reconstructed images of transmission and angular deviation that may help differentiate tissues. However, the statistical uncertainty on these observables generates further challenge in order to obtain an optimal reconstruction and extract the most pertinent information.

  18. Sleep-Driven Computations in Speech Processing

    Science.gov (United States)

    Frost, Rebecca L. A.; Monaghan, Padraic

    2017-01-01

    Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting—on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously—with the conflicting requirements for these tasks being over-come by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation. PMID:28056104

  19. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    Science.gov (United States)

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.
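
    A rough sketch of the per-element work that a system like the DTS distributes across GPU threads is given below: each patient-graphic vertex is tested against the beam cone and, if irradiated, accumulates an inverse-square-weighted dose. The cone geometry and dose model here are illustrative assumptions, not the actual DTS calculation.

        import numpy as np

        def accumulate_skin_dose(verts, dose_map, focus, beam_dir, half_angle, air_kerma):
            """For each patient-graphic vertex, test whether it lies inside the x-ray
            cone and, if so, add an inverse-square-weighted dose.  Every vertex is
            handled independently -- the per-element work that a GPU can assign to one
            thread each (illustrative geometry, not the actual DTS dose model)."""
            v = verts - focus                          # vectors from focal spot to vertices
            dist = np.linalg.norm(v, axis=1)
            cos_a = (v @ beam_dir) / np.maximum(dist, 1e-9)
            inside = cos_a >= np.cos(half_angle)       # inside the beam cone?
            dose_map[inside] += air_kerma / dist[inside] ** 2
            return dose_map

        # toy patient surface: 10,000 random vertices around the origin
        rng = np.random.default_rng(1)
        verts = rng.normal(scale=0.2, size=(10_000, 3))
        dose = np.zeros(len(verts))
        focus = np.array([0.0, 0.0, -0.8])             # focal spot 80 cm below
        beam = np.array([0.0, 0.0, 1.0])               # beam pointing up
        accumulate_skin_dose(verts, dose, focus, beam, np.radians(10), air_kerma=1.0)
        print("irradiated vertices:", int((dose > 0).sum()))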

  20. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    Science.gov (United States)

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-03-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in realtime by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  1. Theoretic computing model of combustion process of asphalt smoke

    Institute of Scientific and Technical Information of China (English)

    HUANG Rui; CHAI Li-yuan; HE De-wen; PENG Bing; WANG Yun-yan

    2005-01-01

    Based on data and methods from the research literature, a discretized mathematical model of the combustion process of asphalt smoke is established by theoretical analysis. Through computer programming, the dynamic combustion process of asphalt smoke is calculated to simulate an experimental model. The computed results show that the temperature and the concentration of asphalt smoke influence its burning temperature in an approximately linear manner. The quantity of fuel consumed to ignite the asphalt smoke therefore needs to be estimated from these two factors.

  2. Analysis of source spectra, attenuation, and site effects from central and eastern United States earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lindley, G.

    1998-02-01

    This report describes the results from three studies of source spectra, attenuation, and site effects of central and eastern United States earthquakes. In the first study source parameter estimates taken from 27 previous studies were combined to test the assumption that the earthquake stress drop is roughly a constant, independent of earthquake size. 200 estimates of stress drop and seismic moment from eastern North American earthquakes were combined. It was found that the estimated stress drop from the 27 studies increases approximately as the square-root of the seismic moment, from about 3 bars at 10{sup 20} dyne-cm to 690 bars at 10{sup 25} dyne-cm. These results do not support the assumption of a constant stress drop when estimating ground motion parameters from eastern North American earthquakes. In the second study, broadband seismograms recorded by the United States National Seismograph Network and cooperating stations have been analysed to determine Q{sub Lg} as a function of frequency in five regions: the northeastern US, southeastern US, central US, northern Basin and Range, and California and western Nevada. In the third study, using spectral analysis, estimates have been made for the anelastic attenuation of four regional phases, and estimates have been made for the source parameters of 27 earthquakes, including the M{sub b} 5.6, 14 April, 1995, West Texas earthquake.

  3. Computer Forensics Field Triage Process Model

    Directory of Open Access Journals (Sweden)

    Marcus K. Rogers

    2006-06-01

    Full Text Available With the proliferation of digital based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time - measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not negate the ability that once the initial field triage is concluded, the system(s)/storage media be transported back to a lab environment for a more thorough examination and analysis. The CFFTPM has been successfully used in various real world cases, and its investigative importance and pragmatic approach has been amply demonstrated. Furthermore, the derived evidence from these cases has not been challenged in the court proceedings where it has been introduced. The current article describes the CFFTPM in detail, discusses the model’s forensic soundness, investigative support capabilities and practical considerations.

  4. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2012-08-22

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants... regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems... entitled ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear...

  5. The Global Energy Situation on Earth, Student Guide. Computer Technology Program Environmental Education Units.

    Science.gov (United States)

    Northwest Regional Educational Lab., Portland, OR.

    This is the student guide in a set of five computer-oriented environmental/energy education units. Contents of this guide are: (1) Introduction to the unit; (2) The "EARTH" program; (3) Exercises; and (4) Sources of information on the energy crisis. This guide supplements a simulation which allows students to analyze different aspects of…

  6. Central Nervous System Based Computing Models for Shelf Life Prediction of Soft Mouth Melting Milk Cakes

    Directory of Open Access Journals (Sweden)

    Gyanendra Kumar Goyal

    2012-04-01

    Full Text Available This paper presents the latency and potential of a central nervous system based intelligent computing system for detecting the shelf life of soft mouth melting milk cakes stored at 10 °C. Soft mouth melting milk cakes are an exquisite sweetmeat made from heat- and acid-thickened solidified sweetened milk. In today’s highly competitive market consumers look for good quality food products. Shelf life is a good and accurate indicator of food quality and safety. To achieve good quality food products, detection of shelf life is important. A central nervous system based intelligent computing model was developed which predicted a shelf life of 19.82 days, as against the experimental shelf life of 21 days.

  7. Mathematical modelling in the computer-aided process planning

    Science.gov (United States)

    Mitin, S.; Bochkarev, P.

    2016-04-01

    This paper presents new approaches to the organization of manufacturing preparation and mathematical models related to the development of a computer-aided multi-product process planning (CAMPP) system. The CAMPP system has some peculiarities compared to existing computer-aided process planning (CAPP) systems: fully formalized development of the machining operations; a capacity to create and formalize the interrelationships among design, process planning and process implementation; and procedures for taking real manufacturing conditions into account. The paper describes the structure of the CAMPP system and shows the mathematical models and methods used to formalize the design procedures.

  8. New photosensitizer with phenylenebisthiophene central unit and cyanovinylene 4-nitrophenyl terminal units for dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Mikroyannidis, J.A., E-mail: mikroyan@chemistry.upatras.gr [Chemical Technology Laboratory, Department of Chemistry, University of Patras, GR-26500 Patras (Greece); Suresh, P. [Physics Department, Molecular Electronic and Optoelectronic Device Laboratory, JNV University, Jodhpur (Raj.) 342005 (India); Roy, M.S. [Defence Laboratory, Jodhpur (Raj.) 342011 (India); Sharma, G.D., E-mail: sharmagd_in@yahoo.com [Physics Department, Molecular Electronic and Optoelectronic Device Laboratory, JNV University, Jodhpur (Raj.) 342005 (India); R and D Centre for Engineering and Science, Jaipur Engineering College, Kukas, Jaipur (Raj.) (India)

    2011-06-30

    Graphical abstract: A novel dye D was synthesized and used as photosensitizer for quasi solid state dye-sensitized solar cells. A power conversion efficiency of 4.4% was obtained which was improved to 5.52% when diphenylphosphinic acid (DPPA) was added as coadsorbent. Highlights: > A new low band gap photosensitizer with cyanovinylene 4-nitrophenyl terminal units was synthesized. > A power conversion efficiency of 4.4% was obtained for the dye-sensitized solar cell based on this photosensitizer. > The power conversion efficiency of the dye-sensitized solar cell was further improved to 5.52% when diphenylphosphinic acid was added as coadsorbent. - Abstract: A new low band gap photosensitizer, D, which contains a 2,2'-(1,4-phenylene) bisthiophene central unit and cyanovinylene 4-nitrophenyl terminal units at both sides was synthesized. The two carboxyls attached to the 2,5-positions of the phenylene ring act as anchoring groups. Dye D was soluble in common organic solvents, showed a long-wavelength absorption maximum at 620-636 nm and an optical band gap of 1.72 eV. The electrochemical parameters, i.e. the highest occupied molecular orbital (HOMO) (-5.1 eV) and the lowest unoccupied molecular orbital (LUMO) (-3.3 eV) energy levels of D, show that this dye is suitable as a molecular sensitizer. The quasi solid state dye-sensitized solar cell (DSSC) based on D shows a short circuit current (J{sub sc}) of 9.95 mA/cm{sup 2}, an open circuit voltage (V{sub oc}) of 0.70 V, and a fill factor (FF) of 0.64, corresponding to an overall power conversion efficiency (PCE) of 4.40% under 100 mW/cm{sup 2} irradiation. The overall PCE has been further improved to 5.52% when diphenylphosphinic acid (DPPA) coadsorbent is incorporated into the D solution. This increased PCE has been attributed to the enhancement in the electron lifetime and reduced recombination of injected electrons with the iodide ions present in the electrolyte with the use of DPPA as coadsorbent. The

  9. Accelerating Malware Detection via a Graphics Processing Unit

    Science.gov (United States)

    2010-09-01

    The PE format is an updated version of the common object file format (COFF) [Mic06].

  10. Effects of aging on peripheral and central auditory processing in rats.

    Science.gov (United States)

    Costa, Margarida; Lepore, Franco; Prévost, François; Guillemot, Jean-Paul

    2016-08-01

    Hearing loss is a hallmark sign in the elderly population. Decline in auditory perception provokes deficits in the ability to localize sound sources and reduces speech perception, particularly in noise. In addition to a loss of peripheral hearing sensitivity, changes in more complex central structures have also been demonstrated. Related to these, this study examines the auditory directional maps in the deep layers of the superior colliculus of the rat. Hence, anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats underwent distortion product of otoacoustic emissions (DPOAEs) to assess cochlear function. Then, auditory brainstem responses (ABRs) were assessed, followed by extracellular single-unit recordings to determine age-related effects on central auditory functions. DPOAE amplitude levels were decreased in aged rats although they were still present between 3.0 and 24.0 kHz. ABR level thresholds in aged rats were significantly elevated at an early (cochlear nucleus - wave II) stage in the auditory brainstem. In the superior colliculus, thresholds were increased and the tuning widths of the directional receptive fields were significantly wider. Moreover, no systematic directional spatial arrangement was present among the neurons of the aged rats, implying that the topographical organization of the auditory directional map was abolished. These results suggest that the deterioration of the auditory directional spatial map can, to some extent, be attributable to age-related dysfunction at more central, perceptual stages of auditory processing.

  11. COMPUTING

    CERN Document Server

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  12. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    Energy Technology Data Exchange (ETDEWEB)

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered as a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  13. Using Graphics Processing Units to solve the classical N-body problem in physics and astrophysics

    CERN Document Server

    Spera, Mario

    2014-01-01

    Graphics Processing Units (GPUs) can speed up the numerical solution of various problems in astrophysics, including the dynamical evolution of stellar systems; the performance gain can be more than a factor of 100 compared to using a Central Processing Unit only. In this work I describe some strategies to speed up the classical N-body problem using GPUs. I show some features of the N-body code HiGPUs as template code. In this context, I also give some hints on the parallel implementation of a regularization method and I introduce the code HiGPUs-R. Although the main application of this work concerns astrophysics, some of the presented techniques are of general validity and can be applied to other branches of physics such as electrodynamics and QCD.
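
    The core of a direct-summation N-body code is the O(N^2) acceleration loop sketched below in plain NumPy; because the sum for each body is independent of the others, assigning one GPU thread (or block) per body is what yields the large speed-ups quoted above. The softening length and particle set are toy values, and this is not the HiGPUs code itself.

        import numpy as np

        def direct_accelerations(pos, mass, G=1.0, eps=1e-3):
            """O(N^2) direct-summation gravitational accelerations.  The sum for each
            body is independent, which is why mapping one body per GPU thread gives
            large speed-ups over a single CPU core."""
            acc = np.zeros_like(pos)
            for i in range(len(pos)):
                d = pos - pos[i]                          # vectors to every other body
                r2 = (d * d).sum(axis=1) + eps ** 2       # softened squared distances
                r2[i] = np.inf                            # skip self-interaction
                acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
            return acc

        rng = np.random.default_rng(2)
        pos = rng.standard_normal((256, 3))
        mass = np.full(256, 1.0 / 256)
        print(direct_accelerations(pos, mass)[0])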

  14. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to develop applications in both academia and industry, and it holds considerable promise for biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then set up comparative experiments in which a 5-hour audio recording of an OSAHS patient was processed by a personal computer, a server and a private cloud computing system to demonstrate the efficiency of the proposed infrastructure.

  15. Simulation and Improvement of the Processing Subsystem of the Manchester Dataflow Computer

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    The Manchester dataflow computer is a well-known dynamic dataflow computer. It is centralized in architecture and simple in organization, and its overhead for communication and scheduling is very small. Its efficiency declines, however, as the number of processing elements in the processing subsystem increases. Several articles have evaluated its performance and presented improved methods. The authors studied its processing subsystem and carried out a simulation. The simulation results show that the efficiency of the processing subsystem drops dramatically when the average number of instruction execution microcycles becomes small and the maximum instruction execution rate is nearly attained. Two improved methods are presented to overcome this disadvantage. The improved processing subsystem, with a cheap distributor made up of a bus and a two-level fixed-priority circuit, retains almost full efficiency whether the average number of instruction execution microcycles is large or small, even as the maximum instruction execution rate is approached.

  16. An Investigation of the Artifacts and Process of Constructing Computers Games about Environmental Science in a Fifth Grade Classroom

    Science.gov (United States)

    Baytak, Ahmet; Land, Susan M.

    2011-01-01

    This study employed a case study design (Yin, "Case study research, design and methods," 2009) to investigate the processes used by 5th graders to design and develop computer games within the context of their environmental science unit, using the theoretical framework of "constructionism." Ten fifth graders designed computer games using "Scratch"…

  17. Quantum computation and the physical computation level of biological information processing

    OpenAIRE

    Castagnoli, Giuseppe

    2009-01-01

    On the basis of introspective analysis, we establish a crucial requirement for the physical computation basis of consciousness: it should allow processing a significant amount of information together at the same time. Classical computation does not satisfy the requirement. At the fundamental physical level, it is a network of two body interactions, each the input-output transformation of a universal Boolean gate. Thus, it cannot process together at the same time more than the three bit input ...

  18. Computers in Public Schools: Changing the Image with Image Processing.

    Science.gov (United States)

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  19. Splash, pop, sizzle: Information processing with phononic computing

    Directory of Open Access Journals (Sweden)

    Sophia R. Sklan

    2015-05-01

    Full Text Available Phonons, the quanta of mechanical vibration, are important to the transport of heat and sound in solid materials. Recent advances in the fundamental control of phonons (phononics) have brought into prominence the potential role of phonons in information processing. In this review, the many directions of realizing phononic computing and information processing are examined. Given the relative similarity of vibrational transport at different length scales, the related fields of acoustic, phononic, and thermal information processing are all included, as are quantum and classical computer implementations. Connections are made between the fundamental questions in phonon transport and phononic control and the device level approach to diodes, transistors, memory, and logic.

  20. Computer Processing Of Tunable-Diode-Laser Spectra

    Science.gov (United States)

    May, Randy D.

    1991-01-01

    Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.
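
    As an illustration of the least-squares fitting step mentioned above, the sketch below fits a simulated direct-transmission scan to a single Lorentzian absorption line with scipy.optimize.curve_fit; the line shape, parameter values and noise level are invented for the example and do not come from the original program.

        import numpy as np
        from scipy.optimize import curve_fit

        def transmission(nu, depth, nu0, hwhm, baseline):
            """Beer-Lambert transmission through a single Lorentzian line."""
            absorbance = depth * hwhm ** 2 / ((nu - nu0) ** 2 + hwhm ** 2)
            return baseline * np.exp(-absorbance)

        # simulated scan: 500 wavelength steps across a weak line plus noise
        nu = np.linspace(-1.0, 1.0, 500)
        rng = np.random.default_rng(3)
        measured = transmission(nu, 0.3, 0.1, 0.05, 1.0) + rng.normal(0, 0.005, nu.size)

        popt, pcov = curve_fit(transmission, nu, measured, p0=[0.1, 0.0, 0.1, 1.0])
        print("fitted depth, centre, HWHM, baseline:", np.round(popt, 3))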

  1. Neuromotor recovery from stroke: Computational models at central, functional, and muscle synergy level

    Directory of Open Access Journals (Sweden)

    Maura eCasadio

    2013-08-01

    Full Text Available Computational models of neuromotor recovery after a stroke might help to unveil the underlying physiological mechanisms and might suggest how to make recovery faster and more effective. At least in principle, these models could serve: (i) to provide testable hypotheses on the nature of recovery; (ii) to predict the recovery of individual patients; (iii) to design patient-specific 'optimal' therapy, by setting the treatment variables for maximizing the amount of recovery or for achieving a better generalization of the learned abilities across different tasks. Here we review the state of the art of computational models for neuromotor recovery through exercise, and their implications for treatment. We show that to properly account for the computational mechanisms of neuromotor recovery, multiple levels of description need to be taken into account. The review specifically covers models of recovery at the central, functional and muscle synergy levels.

  2. Quantum information processing in nanostructures Quantum optics; Quantum computing

    CERN Document Server

    Reina-Estupinan, J H

    2002-01-01

    Since information has been regarded as a physical entity, the field of quantum information theory has blossomed. This brings novel applications, such as quantum computation. This field has attracted the attention of numerous researchers with backgrounds ranging from computer science, mathematics and engineering, to the physical sciences. Thus, we now have an interdisciplinary field where great efforts are being made in order to build devices that should allow for the processing of information at a quantum level, and also in the understanding of the complex structure of some physical processes at a more basic level. This thesis is devoted to the theoretical study of structures at the nanometer scale, 'nanostructures', through physical processes that mainly involve the solid-state and quantum optics, in order to propose reliable schemes for the processing of quantum information. Initially, the main results of quantum information theory and quantum computation are briefly reviewed. Next, the state-of-the-art of ...

  3. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  4. Rationale awareness for quality assurance in iterative human computation processes

    CERN Document Server

    Xiao, Lu

    2012-01-01

    Human computation refers to the outsourcing of computation tasks to human workers. It offers a new direction for solving a variety of problems and calls for innovative ways of managing human computation processes. The majority of human computation tasks take a parallel approach, whereas the potential of an iterative approach, i.e., having workers iteratively build on each other's work, has not been sufficiently explored. This study investigates whether and how human workers' awareness of previous workers' rationales affects the performance of the iterative approach in a brainstorming task and a rating task. Rather than viewing this work as a conclusive piece, the author believes that this research endeavor is just the beginning of a new research focus that examines and supports meta-cognitive processes in crowdsourcing activities.

  5. Finite Element Analysis in Concurrent Processing: Computational Issues

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
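
    A minimal sketch of direct strain-energy minimization in the displacement formulation is shown below: the quadratic energy 0.5*u^T K u - f^T u is minimized by conjugate gradients, which needs only matrix-vector products and therefore avoids factorization fill-in. The small stiffness matrix and load vector are made-up stand-ins for an assembled finite-element system, not data from the study.

        import numpy as np

        def minimize_energy_cg(K, f, tol=1e-10, max_iter=500):
            """Minimise 0.5*u^T K u - f^T u with the conjugate-gradient method.
            Only matrix-vector products are needed (no factorisation and no
            fill-in), which is what makes this style attractive on concurrent
            hardware."""
            u = np.zeros_like(f)
            r = f - K @ u                      # residual = negative gradient of the energy
            p = r.copy()
            for _ in range(max_iter):
                Kp = K @ p
                alpha = (r @ r) / (p @ Kp)
                u += alpha * p
                r_new = r - alpha * Kp
                if np.linalg.norm(r_new) < tol:
                    break
                p = r_new + ((r_new @ r_new) / (r @ r)) * p
                r = r_new
            return u

        # tiny SPD "stiffness" matrix standing in for an assembled FE system
        K = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        f = np.array([1.0, 2.0, 3.0])
        u = minimize_energy_cg(K, f)
        print(u, np.allclose(K @ u, f))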

  6. Paleoseismic targets, seismic hazard, and urban areas in the Central and Eastern United States

    Science.gov (United States)

    Wheeler, R.L.

    2008-01-01

    Published geologic information from the central and eastern United States identifies 83 faults, groups of sand blows, named seismic zones, and other geological features as known or suspected products of Quaternary tectonic faulting. About one fifth of the features are known to contain faulted Quaternary materials or seismically induced liquefaction phenomena, but the origin and associated seismic hazard of most of the other features remain uncertain. Most of the features are in or near large urban areas. The largest cluster of features is in the Boston-Washington urban corridor (2005 estimated population: 50 million). The proximity of most features to populous areas identifies paleoseismic targets with potential to impact urban-hazard estimates.

  7. Extensive diversity of Trypanosoma cruzi discrete typing units circulating in Triatoma dimidiata from central Veracruz, Mexico.

    Science.gov (United States)

    Ramos-Ligonio, Angel; Torres-Montero, Jesús; López-Monteon, Aracely; Dumonteil, Eric

    2012-10-01

    Chagas disease (or American trypanosomiasis) is a parasitic disease of major public health importance, caused by Trypanosoma cruzi, which presents extensive genetic diversity. The parasite has been classified into six lineages or discrete typing units (TcI to TcVI) and we performed here the molecular characterization of the strains present in Triatoma dimidiata, the main vector in central Veracruz, Mexico. Unexpectedly, TcI only represented 9/33 strains identified (27%), and we reported for the first time the presence of TcII, TcIII, TcIV and TcV strains in Mexico, at a relatively high frequency (13-27% each). Our observations indicate a much greater diversity of T. cruzi DTUs than previously estimated in at least part of Mexico. These results have important implications for the understanding of the phylogeography of T. cruzi DTUs and the epidemiology of Chagas disease in North America.

  8. Design of Central Management & Control Unit for Onboard High-Speed Data Handling System

    Institute of Scientific and Technical Information of China (English)

    LI Yan-qin; JIN Sheng-zhen; NING Shu-nian

    2007-01-01

    The Main Optical Telescope (MOT) is an important payload of the Space Solar Telescope (SST) with various instruments and observation modes, so its real-time data handling and its management and control tasks are arduous. Building on advanced techniques developed abroad, an improved structure of onboard data handling systems feasible for the SST is proposed. This article concentrates on the development of a Central Management & Control Unit (MCU) based on FPGA and DSP. By reconfiguring the FPGA and DSP programs, the prototype can perform different tasks, which improves the reusability of the whole system. The completed dual-channel prototype proves that the system meets all requirements of the MOT. Its high reliability and safety features also meet the requirements under harsh conditions such as mine detection.

  9. Optimal location of centralized biodigesters for small dairy farms: A case study from the United States

    Directory of Open Access Journals (Sweden)

    Deep Mukherjee

    2015-06-01

    Full Text Available Anaerobic digestion technology is available for converting livestock waste to bio-energy, but its potential is far from fully exploited in the United States because the technology has a scale effect. Utilization of the centralized anaerobic digester (CAD) concept could make the technology economically feasible for smaller dairy farms. An interdisciplinary methodology to determine the cost-minimizing location, size, and number of CAD facilities in a rural dairy region with mostly small farms is described. This study employs land suitability analysis, an operations research model and Geographical Information System (GIS) tools to evaluate the environmental, social, and economic constraints in selecting appropriate sites for CADs in Windham County, Connecticut. Results indicate that overall costs are lower if the CADs are of larger size and are smaller in number.
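
    As a toy illustration of the kind of cost-minimizing siting problem described, the sketch below enumerates candidate digester locations and keeps the combination that minimizes herd-weighted haul distance; the farm coordinates, herd sizes and candidate sites are random stand-ins, and this brute-force search is not the paper's GIS-based model.

        import numpy as np
        from itertools import combinations

        def best_sites(farms, herd_sizes, candidates, n_sites):
            """Enumerate every way of opening n_sites digesters and keep the one that
            minimises herd-weighted distance from each farm to its nearest open site
            (a toy stand-in for the paper's cost-minimisation model)."""
            best_cost, best_combo = np.inf, None
            for combo in combinations(range(len(candidates)), n_sites):
                # distance from every farm to every opened candidate site
                d = np.linalg.norm(farms[:, None, :] - candidates[list(combo)], axis=2)
                cost = (herd_sizes * d.min(axis=1)).sum()
                if cost < best_cost:
                    best_cost, best_combo = cost, combo
            return best_combo, best_cost

        rng = np.random.default_rng(4)
        farms = rng.uniform(0, 30, size=(40, 2))        # 40 farms in a 30x30 km area
        herds = rng.integers(20, 120, size=40)          # cows per farm
        candidates = rng.uniform(0, 30, size=(8, 2))    # 8 permissible CAD sites
        print(best_sites(farms, herds, candidates, n_sites=2))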

  10. The process group approach to reliable distributed computing

    Science.gov (United States)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  11. Intelligent Computational Systems. Opening Remarks: CFD Application Process Workshop

    Science.gov (United States)

    VanDalsem, William R.

    1994-01-01

    This discussion will include a short review of the challenges that must be overcome if computational physics technology is to have a larger impact on the design cycles of U.S. aerospace companies. Some of the potential solutions to these challenges may come from the information sciences fields. A few examples of potential computational physics/information sciences synergy will be presented, as motivation and inspiration for the Improving The CFD Applications Process Workshop.

  12. Genetic Variation of Sclerotinia sclerotiorum from Multiple Crops in the North Central United States.

    Science.gov (United States)

    Aldrich-Wolfe, Laura; Travers, Steven; Nelson, Berlin D

    2015-01-01

    Sclerotinia sclerotiorum is an important pathogen of numerous crops in the North Central region of the United States. The objective of this study was to examine the genetic diversity of 145 isolates of the pathogen from multiple hosts in the region. Mycelial compatibility groups (MCG) and microsatellite haplotypes were determined and analyzed for standard estimates of population genetic diversity and the importance of host and distance for genetic variation was examined. MCG tests indicated there were 49 different MCGs in the population and 52 unique microsatellite haplotypes were identified. There was an association between MCG and haplotype such that isolates belonging to the same MCG either shared identical haplotypes or differed at no more than 2 of the 12 polymorphic loci. For the majority of isolates, there was a one-to-one correspondence between MCG and haplotype. Eleven MCGs shared haplotypes. A single haplotype was found to be prevalent throughout the region. The majority of genetic variation in the isolate collection was found within rather than among host crops, suggesting little genetic divergence of S. sclerotiorum among hosts. There was only weak evidence of isolation by distance. Pairwise population comparisons among isolates from canola, dry bean, soybean and sunflower suggested that gene flow between host-populations is more common for some crops than others. Analysis of linkage disequilibrium in the isolates from the four major crops indicated primarily clonal reproduction, but also evidence of genetic recombination for isolates from canola and sunflower. Accordingly, genetic diversity was highest for populations from canola and sunflower. Distribution of microsatellite haplotypes across the study region strongly suggest that specific haplotypes of S. sclerotiorum are often found on multiple crops, movement of individual haplotypes among crops is common and host identity is not a barrier to gene flow for S. sclerotiorum in the north central United

  13. VisMatchmaker: Cooperation of the User and the Computer in Centralized Matching Adjustment.

    Science.gov (United States)

    Law, Po-Ming; Wu, Wenchao; Zheng, Yixian; Qu, Huamin

    2017-01-01

    Centralized matching is a ubiquitous resource allocation problem. In a centralized matching problem, each agent has a preference list ranking the other agents and a central planner is responsible for matching the agents manually or with an algorithm. While algorithms can find a matching which optimizes some performance metrics, they are used as a black box and preclude the central planner from applying his domain knowledge to find a matching which aligns better with the user tasks. Furthermore, the existing matching visualization techniques (i.e. bipartite graph and adjacency matrix) fail in helping the central planner understand the differences between matchings. In this paper, we present VisMatchmaker, a visualization system which allows the central planner to explore alternatives to an algorithm-generated matching. We identified three common tasks in the process of matching adjustment: problem detection, matching recommendation and matching evaluation. We classified matching comparison into three levels and designed visualization techniques for them, including the number line view and the stacked graph view. Two types of algorithmic support, namely direct assignment and range search, and their interactive operations are also provided to enable the user to apply his domain knowledge in matching adjustment.
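
    The abstract does not say which algorithm the planner uses; as one standard example of algorithmic centralized matching from preference lists, the sketch below implements Gale-Shapley deferred acceptance on a toy instance with hypothetical agent names.

        def deferred_acceptance(proposer_prefs, reviewer_prefs):
            """Gale-Shapley deferred acceptance: a standard way a central planner can
            compute a stable matching from the agents' preference lists."""
            rank = {r: {p: i for i, p in enumerate(prefs)}        # reviewer's ranking
                    for r, prefs in reviewer_prefs.items()}
            free = list(proposer_prefs)                           # unmatched proposers
            next_choice = {p: 0 for p in proposer_prefs}
            match = {}                                            # reviewer -> proposer
            while free:
                p = free.pop()
                r = proposer_prefs[p][next_choice[p]]
                next_choice[p] += 1
                if r not in match:
                    match[r] = p
                elif rank[r][p] < rank[r][match[r]]:              # r prefers p
                    free.append(match[r])
                    match[r] = p
                else:
                    free.append(p)
            return match

        students = {"s1": ["a", "b"], "s2": ["a", "b"]}
        advisors = {"a": ["s2", "s1"], "b": ["s1", "s2"]}
        print(deferred_acceptance(students, advisors))   # {'a': 's2', 'b': 's1'}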

  14. Learner Use of Holistic Language Units in Multimodal, Task-Based Synchronous Computer-Mediated Communication

    Directory of Open Access Journals (Sweden)

    Karina Collentine

    2009-06-01

    Full Text Available Second language acquisition (SLA) researchers strive to understand the language and exchanges that learners generate in synchronous computer-mediated communication (SCMC). Doughty and Long (2003) advocate replacing open-ended SCMC with task-based language teaching (TBLT) design principles. Since most task-based SCMC (TB-SCMC) research addresses an interactionist view (e.g., whether uptake occurs), we know little about holistic language units generated by learners even though research suggests that task demands make TB-SCMC communication notably different from general SCMC communication. This study documents and accounts for discourse-pragmatic and sociocultural behaviors learners exhibit in TB-SCMC. To capture a variety of such behaviors, it documents holistic language units produced by intermediate and advanced learners of Spanish during two multimodal, TB-SCMC activities. The study found that simple assertions were most prevalent (a) with dyads at the lower level of instruction and (b) when dyads had a relatively short amount of time to chat. Additionally, interpersonal, sociocultural behaviors (e.g., joking, off-task discussions) were more likely to occur (a) amongst dyads at the advanced level and (b) when they had relatively more time to chat. Implications explain how tasks might mitigate the potential processing overload that multimodal materials could incur.

  15. Discontinuous Galerkin methods on graphics processing units for nonlinear hyperbolic conservation laws

    CERN Document Server

    Fuhry, Martin; Krivodonova, Lilia

    2016-01-01

    We present a novel implementation of the modal discontinuous Galerkin (DG) method for hyperbolic conservation laws in two dimensions on graphics processing units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA). Both flexible and highly accurate, DG methods accommodate parallel architectures well as their discontinuous nature produces element-local approximations. High performance scientific computing suits GPUs well, as these powerful, massively parallel, cost-effective devices have recently included support for double-precision floating point numbers. Computed examples for Euler equations over unstructured triangle meshes demonstrate the effectiveness of our implementation on an NVIDIA GTX 580 device. Profiling of our method reveals performance comparable to an existing nodal DG-GPU implementation for linear problems.
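
    As a much-simplified stand-in for the element-local DG updates described above, the sketch below advances 1D linear advection with a first-order finite-volume upwind step; each cell depends only on itself and one neighbour, which is the locality that maps well to GPU threads. The grid, wave speed and CFL number are toy choices, and this is not the CUDA implementation of the paper.

        import numpy as np

        def upwind_step(u, a, dt, dx):
            """One explicit upwind step for u_t + a u_x = 0 (a > 0) on a periodic grid.
            Each cell's update depends only on itself and its left neighbour, so all
            cells can be updated in parallel -- the same element-local structure that
            makes DG methods map well to GPUs."""
            return u - a * dt / dx * (u - np.roll(u, 1))

        nx, a = 200, 1.0
        x = np.linspace(0, 1, nx, endpoint=False)
        dx = x[1] - x[0]
        dt = 0.5 * dx / a                        # CFL-limited time step
        u = np.exp(-200 * (x - 0.3) ** 2)        # initial Gaussian pulse
        u0_mass = u.sum() * dx
        for _ in range(200):
            u = upwind_step(u, a, dt, dx)
        print("mass change after 200 steps:", abs(u.sum() * dx - u0_mass))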

  16. One central oscillatory drive is compatible with experimental motor unit behaviour in essential and Parkinsonian tremor

    Science.gov (United States)

    Dideriksen, Jakob L.; Gallego, Juan A.; Holobar, Ales; Rocon, Eduardo; Pons, Jose L.; Farina, Dario

    2015-08-01

    Objective. Pathological tremors are symptomatic of several neurological disorders that are difficult to differentiate, and the way by which central oscillatory networks entrain tremorogenic contractions is unknown. We considered the alternative hypotheses that tremor arises from one oscillator (at the tremor frequency) or, as suggested by recent findings, from the superimposition of two separate inputs (at the tremor frequency and twice that frequency). Approach. Assuming one central oscillatory network, we estimated analytically the relative amplitude of the harmonics of the tremor frequency in the motor neuron output for different temporal behaviors of the oscillator. Next, we analyzed the bias in the relative harmonics amplitude introduced by superimposing oscillations at twice the tremor frequency. These findings were validated using experimental measurements of wrist angular velocity and surface electromyography (EMG) from 22 patients (11 essential tremor, 11 Parkinson’s disease). The ensemble motor unit action potential trains identified from the EMG represented the neural drive to the muscles. Main results. The analytical results showed that the relative power of the tremor harmonics in the analytical models of the neural drive was determined by the variability and duration of the tremor bursts, and that the presence of the second oscillator biased this power towards higher values. The experimental findings accurately matched the analytical model assuming one oscillator, indicating a negligible functional role of secondary oscillatory inputs. Furthermore, a significant difference in the relative power of harmonics in the neural drive was found across the patient groups, suggesting a diagnostic value of this measure (classification accuracy: 86%). This diagnostic power decreased substantially when estimated from limb acceleration or the EMG. Significance. The results indicate that the neural drive in pathological tremor is compatible with one central network
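
    As a small illustration of the kind of measure discussed, the sketch below computes the power of the harmonics of the tremor frequency relative to the fundamental from the spectrum of a simulated bursty drive; the burst model and all parameter values are invented for the example and are not the paper's analytical derivation.

        import numpy as np

        def harmonic_power_ratio(signal, fs, f_tremor, n_harmonics=3):
            """Power at 2*f, 3*f, ... relative to the power at the tremor frequency f,
            taken from the signal's spectrum (illustrative implementation of the kind
            of measure described)."""
            spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
            freqs = np.fft.rfftfreq(len(signal), 1 / fs)
            def band_power(f):
                return spec[np.argmin(np.abs(freqs - f))]
            fundamental = band_power(f_tremor)
            return [band_power(k * f_tremor) / fundamental for k in range(2, n_harmonics + 2)]

        # simulated neural drive: 5 Hz bursts whose width controls the harmonic content
        fs, f0 = 1000, 5.0
        t = np.arange(0, 30, 1 / fs)
        drive = (np.cos(2 * np.pi * f0 * t) > 0.6).astype(float)   # short burst each cycle
        print(np.round(harmonic_power_ratio(drive, fs, f0), 3))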

  17. Distributed trace using central performance counter memory

    Science.gov (United States)

    Satterfield, David L.; Sexton, James C.

    2013-01-22

    A plurality of processing cores and a central storage unit having at least memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit.

  18. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
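
    A hypothetical illustration of window-based congestion estimation is sketched below: if each contending ONU replies in one of w random-delay slots, the expected fraction of silent slots is (1 - 1/w)**n, which can be inverted to estimate the number of contenders. This simple occupancy model is only an assumption made for the example and is not the CET proposed in the paper.

        import numpy as np

        def estimate_contenders(n_empty, w):
            """Estimate how many ONUs contended in a w-slot quiet window from the
            number of slots in which the OLT heard nothing, using
            P(slot empty) = (1 - 1/w)**n.  Illustrative model only, not the paper's
            estimator."""
            n_empty = max(n_empty, 1)                  # guard against log(0)
            return np.log(n_empty / w) / np.log(1 - 1 / w)

        # simulate 60 unregistered ONUs each replying after a random delay slot
        rng = np.random.default_rng(5)
        w = 32
        slots = rng.integers(0, w, size=60)
        occupancy = np.bincount(slots, minlength=w)
        n_hat = estimate_contenders(int((occupancy == 0).sum()), w)
        print(f"estimated contenders: {n_hat:.0f} (true value 60)")
        # the OLT could then grow the quiet window until the predicted collision
        # probability per ONU drops below a chosen target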

  19. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product- and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text reception, production, and translation processes. In this talk I describe some of the functions and features of the TPR-DB v1.4, and how they can be deployed in empirical human translation process research.

  20. Kinds of damage that could result from a great earthquake in the central United States

    Science.gov (United States)

    Hooper, M.G.; Algermissen, S.T.

    1985-01-01

    In the winter of 1811-12 a series of three great earthquakes occurred in the New Madrid, Missouri seismic zone in the central United States. In addition to the three principal shocks, at least 15 other earthquakes of intensity VIII or more occurred within a year of the first large earthquake on December 16, 1811. The three main shocks were felt over the entire eastern United States. They were strong enough to cause minor damage as far away as Indiana and Ohio on the north, the Carolinas on the east, and southern Mississippi to the south. They were strong enough to cause severe or structural damage in parts of Missouri, Illinois, Indiana, Kentucky, Tennessee, Mississippi, and Arkansas. A later section in this article describes what happened in the epicentral region. Fortunately, few people lived in the severely shaken area in 1811; that is not the case today. What would happen if a series of earthquakes as large and numerous as the "New Madrid" earthquakes were to occur in the New Madrid seismic zone today?

  1. Business Process Quality Computation: Computing Non-Functional Requirements to Improve Business Processes

    NARCIS (Netherlands)

    Heidari, F.

    2015-01-01

    Business process modelling is an important part of system design. When designing or redesigning a business process, stakeholders specify, negotiate, and agree on business requirements to be satisfied, including non-functional requirements that concern the quality of the business process. This thesis

  2. Active microchannel fluid processing unit and method of making

    Science.gov (United States)

    Bennett, Wendy D [Kennewick, WA; Martin, Peter M [Kennewick, WA; Matson, Dean W [Kennewick, WA; Roberts, Gary L [West Richland, WA; Stewart, Donald C [Richland, WA; Tonkovich, Annalee Y [Pasco, WA; Zilka, Jennifer L [Pasco, WA; Schmitt, Stephen C [Dublin, OH; Werner, Timothy M [Columbus, OH

    2001-01-01

    The present invention is an active microchannel fluid processing unit and method of making, both relying on having (a) at least one inner thin sheet; (b) at least one outer thin sheet; (c) defining at least one first sub-assembly for performing at least one first unit operation by stacking a first of the at least one inner thin sheet in alternating contact with a first of the at least one outer thin sheet into a first stack and placing an end block on the at least one inner thin sheet, the at least one first sub-assembly having at least a first inlet and a first outlet; and (d) defining at least one second sub-assembly for performing at least one second unit operation either as a second flow path within the first stack or by stacking a second of the at least one inner thin sheet in alternating contact with second of the at least one outer thin sheet as a second stack, the at least one second sub-assembly having at least a second inlet and a second outlet.

  3. Computational simulations and experimental validation of a furnace brazing process

    Energy Technology Data Exchange (ETDEWEB)

    Hosking, F.M.; Gianoulakis, S.E.; Malizia, L.A.

    1998-12-31

    Modeling of a furnace brazing process is described. The computational tools predict the thermal response of loaded hardware in a hydrogen brazing furnace to programmed furnace profiles. Experiments were conducted to validate the model and resolve computational uncertainties. Critical boundary conditions that affect materials and processing response to the furnace environment were determined. "Global" and local issues (i.e., at the furnace/hardware and joint levels, respectively) are discussed. The ability to accurately simulate and control furnace conditions is examined.

  4. Advanced Investigation and Comparative Study of Graphics Processing Unit-queries Countered

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2014-10-01

    Full Text Available The GPU, or Graphics Processing Unit, is the buzzword ruling the market these days. What a GPU is and how it has gained so much importance are the questions this research work sets out to answer. The study has been constructed with full attention paid to answering the following questions: What is a GPU? How is it different from a CPU? How good or bad is it computationally when compared to a CPU? Can the GPU replace the CPU, or is that a daydream? How significant is the arrival of the APU (Accelerated Processing Unit) in the market? What tools are needed to make a GPU work? What are the improvement and focus areas for the GPU to hold its place in the market? All of the above questions are discussed and answered in this study with relevant explanations.

  5. BarraCUDA - a fast short read sequence aligner using graphics processing units

    Directory of Open Access Journals (Sweden)

    Klus Petr

    2012-01-01

    Full Text Available Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computational-intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  6. Graphical processing unit implementation of an integrated shape-based active contour: Application to digital pathology

    Directory of Open Access Journals (Sweden)

    Sahirzeeshan Ali

    2011-01-01

    Full Text Available Commodity graphics hardware has become a cost-effective parallel platform to solve many general computational problems. In medical imaging, and more so in digital pathology, segmentation of multiple structures on high-resolution images is often a complex and computationally expensive task. Shape-based level set segmentation has recently emerged as a natural solution to segmenting overlapping and occluded objects. However, the flexibility of the level set method has traditionally resulted in long computation times and therefore might have limited clinical utility. The processing times even for moderately sized images could run into several hours of computation time. Hence there is a clear need to accelerate these segmentation schemes. In this paper, we present a parallel implementation of a computationally heavy segmentation scheme on a graphical processing unit (GPU). The segmentation scheme incorporates level sets with shape priors to segment multiple overlapping nuclei from very large digital pathology images. We report a speedup of 19× compared to multithreaded C and MATLAB-based implementations of the same scheme, albeit with a slight reduction in accuracy. Our GPU-based segmentation scheme was rigorously and quantitatively evaluated for the problem of nuclei segmentation and overlap resolution on digitized histopathology images corresponding to breast and prostate biopsy tissue specimens.
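
    As a minimal illustration of why level-set evolution parallelizes well, the sketch below performs explicit mean-curvature steps on a regular grid in NumPy; each pixel uses only its immediate neighbours, which is the structure the GPU implementation exploits. This is a generic level-set step with toy parameters, not the shape-prior scheme of the paper.

        import numpy as np

        def curvature_flow_step(phi, dt=0.2):
            """One explicit mean-curvature step of a level-set function.  Each pixel
            uses only its immediate neighbours, so all pixels can be updated
            concurrently.  Generic step, not the shape-prior model of the paper."""
            gy, gx = np.gradient(phi)
            mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
            # divergence of the normalised gradient = curvature of the level sets
            div_y, _ = np.gradient(gy / mag)
            _, div_x = np.gradient(gx / mag)
            kappa = div_x + div_y
            return phi + dt * kappa * mag

        # initialise phi as the signed distance to a circle and evolve it a little
        n = 128
        y, x = np.mgrid[0:n, 0:n]
        phi = np.sqrt((x - 64.0) ** 2 + (y - 64.0) ** 2) - 30.0
        for _ in range(20):
            phi = curvature_flow_step(phi)
        print("zero level set still present:", bool((phi < 0).any() and (phi > 0).any()))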

  7. BarraCUDA - a fast short read sequence aligner using graphics processing units

    LENUS (Irish Health Repository)

    Klus, Petr

    2012-01-13

    Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of the GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  8. The Computer-Aided Analytic Process Model. Operations Handbook for the Analytic Process Model Demonstration Package

    Science.gov (United States)

    1986-01-01

    Research Note 86-06: The Computer-Aided Analytic Process Model: Operations Handbook for the Analytic Process Model Demonstration Package. Ronald G. ... Keywords: Analytic Process Model; Operations Handbook; Tutorial; Apple; Systems Taxonomy Model; Training System; Bradley Infantry Fighting Vehicle; BIFV ... The abstract continues in the companion volume, "The Analytic Process Model for ...

  9. The certification process of the LHCb distributed computing software

    CERN Document Server

    CERN. Geneva

    2015-01-01

    DIRAC contains around 200 thousand lines of Python code, and LHCbDIRAC around 120 thousand. The testing process for each release consists of a number of steps that include static code analysis, unit tests, integration tests, regression tests, and system tests. We dubbed the full p...

  10. The Impact of Mild Central Auditory Processing Disorder on School Performance during Adolescence

    Science.gov (United States)

    Heine, Chyrisse; Slone, Michelle

    2008-01-01

    Central Auditory Processing (CAP) difficulties have attained increasing recognition leading to escalating rates of referrals for evaluation. Recognition of the association between (Central) Auditory Processing Disorder ((C)APD) and language, learning, and literacy difficulties has resulted in increased referrals and detection in school-aged…

  11. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  12. Computational fluid dynamics evaluation of liquid food thermal process in a brick shaped package

    Directory of Open Access Journals (Sweden)

    Pedro Esteves Duarte Augusto

    2012-03-01

    Full Text Available Food processes must ensure safety and high-quality products for increasingly demanding consumers, creating the need for better knowledge of their unit operations. Computational Fluid Dynamics (CFD) has been widely used to better understand food thermal processes, which are among the safest and most frequently used methods of food preservation. However, there is no single study in the literature describing the thermal processing of liquid foods in a brick-shaped package. The present study evaluated such a process and the influence of package orientation on process lethality. It demonstrated the potential of using CFD to evaluate thermal processes of liquid foods and the importance of rheological characterization and convection in the thermal processing of liquid foods. It also showed that packaging orientation does not result in different sterilization values during the thermal processing of the evaluated fluids in the brick-shaped package.
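
    To make the notion of process lethality concrete, the sketch below solves 1-D transient conduction through a slab (a crude, purely conductive stand-in for a package; the study above uses 3-D CFD with convection and rheological data) and accumulates an F-value at the slowest-heating node. The thermal diffusivity, geometry, retort temperature, reference temperature of 121.1 °C and z-value of 10 °C are assumptions for illustration.

```python
import numpy as np

# 1-D explicit finite-difference conduction in a slab plus F-value (lethality)
# at the centre node. Purely conductive stand-in for a packaged liquid food.

alpha = 1.4e-7        # thermal diffusivity, m^2/s (assumed)
L = 0.04              # half-thickness of the slab, m (assumed)
nx = 41
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # satisfies the explicit stability limit
t_end = 1800.0                    # process time, s (assumed)
T_retort, T0 = 121.0, 20.0        # boundary and initial temperatures, deg C
T_ref, z = 121.1, 10.0            # reference temperature and z-value, deg C

T = np.full(nx, T0)
F = 0.0
for _ in range(int(t_end / dt)):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[0] = T_retort            # heated surface
    T[-1] = T[-2]              # symmetry (insulated) boundary at the centre line
    # Accumulate lethality at the slowest-heating node (the centre line), in minutes.
    F += dt / 60.0 * 10.0 ** ((T[-1] - T_ref) / z)

print(f"centre temperature after {t_end/60:.0f} min: {T[-1]:.1f} C, F-value: {F:.2f} min")
```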

  13. Evidence of a sensory processing unit in the mammalian macula

    Science.gov (United States)

    Chimento, T. C.; Ross, M. D.

    1996-01-01

    We cut serial sections through the medial part of the rat vestibular macula for transmission electron microscopic (TEM) examination, computer-assisted 3-D reconstruction, and compartmental modeling. The ultrastructural research showed that many primary vestibular neurons have an unmyelinated segment, often branched, that extends between the heminode (putative site of the spike initiation zone) and the expanded terminal(s) (calyx, calyces). These segments, termed the neuron branches, and the calyces frequently have spine-like processes of various dimensions with bouton endings that morphologically are afferent, efferent, or reciprocal to other macular neural elements. The major questions posed by this study were whether small details of morphology, such as the size and location of neuronal processes or synapses, could influence the output of a vestibular afferent, and whether a knowledge of morphological details could guide the selection of values for simulation parameters. The conclusions from our simulations are (1) values of 5.0 kΩ·cm² for membrane resistivity and 1.0 nS for synaptic conductance yield simulations that best match published physiological results; (2) process morphology has little effect on orthodromic spread of depolarization from the head (bouton) to the spike initiation zone (SIZ); (3) process morphology has no effect on antidromic spread of depolarization to the process head; (4) synapses do not sum linearly; (5) synapses are electrically close to the SIZ; and (6) all whole-cell simulations should be run with an active SIZ.

  14. Bioinformation processing a primer on computational cognitive science

    CERN Document Server

    Peterson, James K

    2016-01-01

    This book shows how mathematics, computer science and science can be usefully and seamlessly intertwined. It begins with a general model of cognitive processes in a network of computational nodes, such as neurons, using a variety of tools from mathematics, computational science and neurobiology. It then moves on to solve the diffusion model from a low-level random walk point of view. It also demonstrates how this idea can be used in a new approach to solving the cable equation, in order to better understand the neural computation approximations. It introduces specialized data for emotional content, which allows a brain model to be built using MatLab tools, and also highlights a simple model of cognitive dysfunction.

  15. Non-parallel processing: Gendered attrition in academic computer science

    Science.gov (United States)

    Cohoon, Joanne Louise Mcgrath

    2000-10-01

    This dissertation addresses the issue of disproportionate female attrition from computer science as an instance of gender segregation in higher education. By adopting a theoretical framework from organizational sociology, it demonstrates that the characteristics and processes of computer science departments strongly influence female retention. The empirical data identifies conditions under which women are retained in the computer science major at comparable rates to men. The research for this dissertation began with interviews of students, faculty, and chairpersons from five computer science departments. These exploratory interviews led to a survey of faculty and chairpersons at computer science and biology departments in Virginia. The data from these surveys are used in comparisons of the computer science and biology disciplines, and for statistical analyses that identify which departmental characteristics promote equal attrition for male and female undergraduates in computer science. This three-pronged methodological approach of interviews, discipline comparisons, and statistical analyses shows that departmental variation in gendered attrition rates can be explained largely by access to opportunity, relative numbers, and other characteristics of the learning environment. Using these concepts, this research identifies nine factors that affect the differential attrition of women from CS departments. These factors are: (1) The gender composition of enrolled students and faculty; (2) Faculty turnover; (3) Institutional support for the department; (4) Preferential attitudes toward female students; (5) Mentoring and supervising by faculty; (6) The local job market, starting salaries, and competitiveness of graduates; (7) Emphasis on teaching; and (8) Joint efforts for student success. This work contributes to our understanding of the gender segregation process in higher education. In addition, it contributes information that can lead to effective solutions for an

  16. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  17. Coal Gasification Processes for Retrofitting Military Central Heating Plants: Overview

    Science.gov (United States)

    1992-11-01

    inorganic material remaining after coal is completely combusted. At high temperatures, the ash will melt and clinkers may form. The ash composition ... was employed until 1941 by more than 9000 producers worldwide. Units installed in 1933 and 1948 are currently operating in South Africa. In 1980 ... Installed units include: Springs, South Africa (Vaal Potteries Ltd., 1, 8.5, bituminous, operational) and Meyerton, South Africa (Union Steel Corporation, 2, 10, bituminous, operational).

  18. Improving management decision processes through centralized communication linkages

    Science.gov (United States)

    Simanton, D. F.; Garman, J. R.

    1985-01-01

    Information flow is a critical element to intelligent and timely decision-making. At NASA's Johnson Space Center the flow of information is being automated through the use of a centralized backbone network. The theoretical basis of this network, its implications to the horizontal and vertical flow of information, and the technical challenges involved in its implementation are the focus of this paper. The importance of the use of common tools among programs and some future concerns related to file transfer, graphics transfer, and merging of voice and data are also discussed.

  19. Analysis of an Abrupt Rainstorm Process in Central Hunan Province

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The aim is to expound the abrupt rainstorm in central Hunan Province on May 6, 2010. [Method] Using NCEP 1°×1° reanalysis data, routine observation data, automatic-station precipitation data and FY-2C satellite data, the large-scale circulation background and physical conditions during the heavy rainstorm period from the night of May 5 to May 6, 2010 were analyzed. The large-scale environment, meso-scale characteristics and potential causes of the formation of the heavy precipitation were revealed. By dint ...

  20. Computer-Aided Process Model For Carbon/Phenolic Materials

    Science.gov (United States)

    Letson, Mischell A.; Bunker, Robert C.

    1996-01-01

    A computer program implements a thermochemical model of the processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving the fabrication of rocket-engine-nozzle parts, it is also used to optimize the fabrication of other structural components, and its material-property parameters can be changed to apply the model to other materials. The program reduces costs by reducing the amount of laboratory trial and error needed to optimize curing processes and to predict the properties of cured parts.

  1. A new perspective on the 1930s mega-heat waves across central United States

    Science.gov (United States)

    Cowan, Tim; Hegerl, Gabi

    2016-04-01

    The unprecedented hot and dry conditions that plagued the contiguous United States during the 1930s caused widespread devastation for many local communities and severely dented the emerging economy. The heat extremes experienced during the aptly named Dust Bowl decade were not isolated incidents, but part of a tendency towards warm summers over the central United States in the early 1930s that peaked in the boreal summer of 1936. Using high-quality daily maximum and minimum temperature observations from more than 880 Global Historical Climatology Network stations across the United States and southern Canada, we assess the record-breaking heat waves of the 1930s Dust Bowl decade. A comparison is made to more recent heat waves that have occurred during the latter half of the 20th century (i.e., in a warming world), both averaged over selected years and across decades. We further test the ability of coupled climate models to simulate mega-heat waves (i.e., the most extreme events) across the United States in a pre-industrial climate without the impact of any long-term anthropogenic warming. Well-established heat wave metrics based on temperature percentile threshold exceedances over three or more consecutive days are used to describe variations in the frequency, duration, amplitude and timing of the events. Causal factors such as drought severity/soil moisture deficits in the lead-up to the heat waves (interannual), as well as the concurrent synoptic conditions (interdiurnal) and variability in Pacific and Atlantic sea surface temperatures (decadal), are also investigated. Results suggest that while each heat wave summer in the 1930s exhibited quite unique characteristics in terms of timing, duration, amplitude, and regional clustering, a common factor in the Dust Bowl decade was the high number of consecutive dry seasons, as measured by drought indicators such as the Palmer Drought Severity and Standardised Precipitation indices, that preceded the mega-heat waves. This
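
    A heat wave metric of the kind described, requiring three or more consecutive days above a high percentile of daily maximum temperature, takes only a few lines of NumPy; the synthetic temperature series and the 90th-percentile threshold below are assumptions used purely for illustration.

```python
import numpy as np

def heat_wave_events(tmax, percentile=90.0, min_days=3):
    """Return (start_index, length) for runs of >= min_days above the percentile threshold."""
    threshold = np.percentile(tmax, percentile)
    hot = tmax > threshold
    events, run_start = [], None
    for i, flag in enumerate(np.append(hot, False)):   # sentinel closes a trailing run
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if i - run_start >= min_days:
                events.append((run_start, i - run_start))
            run_start = None
    return threshold, events

if __name__ == "__main__":
    rng = np.random.default_rng(1936)
    # Synthetic summer of daily maximum temperatures (deg C) with an imposed hot spell.
    tmax = rng.normal(31.0, 3.0, size=92)
    tmax[40:46] += 9.0
    thr, events = heat_wave_events(tmax)
    print(f"threshold = {thr:.1f} C")
    for start, length in events:
        print(f"heat wave starting day {start}, lasting {length} days, "
              f"peak {tmax[start:start+length].max():.1f} C")
```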

  2. Tutorial: Signal Processing in Brain-Computer Interfaces

    NARCIS (Netherlands)

    Garcia Molina, G.

    2010-01-01

    Research in Electroencephalogram (EEG) based Brain-Computer Interfaces (BCIs) has been considerably expanding during the last few years. Such an expansion owes to a large extent to the multidisciplinary and challenging nature of BCI research. Signal processing undoubtedly constitutes an essential co

  3. Proceedings Second International Worshop on Computational Models for Cell Processes

    CERN Document Server

    Back, Ralph-Johan; de Vink, Erik

    2009-01-01

    The second international workshop on Computational Models for Cell Processes (ComProc 2009) took place on November 3, 2009 at the Eindhoven University of Technology, in conjunction with Formal Methods 2009. The workshop was jointly organized with the EC-MOAN project. This volume contains the final versions of all contributions accepted for presentation at the workshop.

  4. Moulding process characterization of paper bottles using computed tomography

    DEFF Research Database (Denmark)

    Saxena, Prateek; Bissacco, Giuliano

    2016-01-01

    The paper presents an approach to evaluating the moulding process for the production of paper bottles using Computed Tomography (CT). Moulded Pulp Products (MPP) are made of a formed, dewatered and dried mixture of pulp fibers and water. Modern industrial pulp moulding dates back to the year 1903 whe...

  5. Computer simulation program is adaptable to industrial processes

    Science.gov (United States)

    Schultz, F. E.

    1966-01-01

    The Reaction kinetics ablation program /REKAP/, developed to simulate ablation of various materials, provides mathematical formulations for computer programs which can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.

  6. Computational Models of Relational Processes in Cognitive Development

    Science.gov (United States)

    Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven

    2012-01-01

    Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…

  7. Quantum computation and the physical computation level of biological information processing

    CERN Document Server

    Castagnoli, Giuseppe

    2009-01-01

    On the basis of introspective analysis, we establish a crucial requirement for the physical computation basis of consciousness: it should allow processing a significant amount of information together at the same time. Classical computation does not satisfy the requirement. At the fundamental physical level, it is a network of two-body interactions, each the input-output transformation of a universal Boolean gate. Thus, it cannot process together at the same time more than the three-bit input of this gate; many such gates in parallel do not count, since the information is not processed together. Quantum computation satisfies the requirement. In the light of our recent explanation of the speed-up, quantum measurement of the solution of the problem is analogous to a many-body interaction between the parts of a perfect classical machine, whose mechanical constraints represent the problem to be solved. The many-body interaction satisfies all the constraints together at the same time, producing the solution in one ...

  8. In-silico design of computational nucleic acids for molecular information processing.

    Science.gov (United States)

    Ramlan, Effirul Ikhwan; Zauner, Klaus-Peter

    2013-05-07

    Within recent years nucleic acids have become a focus of interest for prototype implementations of molecular computing concepts. During the same period the importance of ribonucleic acids as components of the regulatory networks within living cells has increasingly been revealed. Molecular computers are attractive due to their ability to function within a biological system; an application area extraneous to the present information technology paradigm. The existence of natural information processing architectures (predominately exemplified by protein) demonstrates that computing based on physical substrates that are radically different from silicon is feasible. Two key principles underlie molecular level information processing in organisms: conformational dynamics of macromolecules and self-assembly of macromolecules. Nucleic acids support both principles, and moreover computational design of these molecules is practicable. This study demonstrates the simplicity with which one can construct a set of nucleic acid computing units using a new computational protocol. With the new protocol, diverse classes of nucleic acids imitating the complete set of boolean logical operators were constructed. These nucleic acid classes display favourable thermodynamic properties and are significantly similar to the approximation of successful candidates implemented in the laboratory. This new protocol would enable the construction of a network of interconnecting nucleic acids (as a circuit) for molecular information processing.

  9. Corrective Action Plan for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    K. Campbell

    2000-04-01

    This Corrective Action Plan provides methods for implementing the approved corrective action alternative as provided in the Corrective Action Decision Document for the Central Nevada Test Area (CNTA), Corrective Action Unit (CAU) 417 (DOE/NV, 1999). The CNTA is located in the Hot Creek Valley in Nye County, Nevada, approximately 137 kilometers (85 miles) northeast of Tonopah, Nevada. The CNTA consists of three separate land withdrawal areas commonly referred to as UC-1, UC-3, and UC-4, all of which are accessible to the public. CAU 417 consists of 34 Corrective Action Sites (CASs). Results of the investigation activities completed in 1998 are presented in Appendix D of the Corrective Action Decision Document (DOE/NV, 1999). According to the results, the only Constituent of Concern at the CNTA is total petroleum hydrocarbons (TPH). Of the 34 CASs, corrective action was proposed for 16 sites in 13 CASs. In fiscal year 1999, a Phase I Work Plan was prepared for the construction of a cover on the UC-4 Mud Pit C to gather information on cover constructibility and to perform site management activities. With Nevada Division of Environmental Protection concurrence, the Phase I field activities began in August 1999. A multi-layered cover using a Geosynthetic Clay Liner as an infiltration barrier was constructed over the UC-4 Mud Pit. Some TPH impacted material was relocated, concrete monuments were installed at nine sites, signs warning of site conditions were posted at seven sites, and subsidence markers were installed on the UC-4 Mud Pit C cover. Results from the field activities indicated that the UC-4 Mud Pit C cover design was constructable and could be used at the UC-1 Central Mud Pit (CMP). However, because of the size of the UC-1 CMP this design would be extremely costly. An alternative cover design, a vegetated cover, is proposed for the UC-1 CMP.

  10. Efficient neighbor list calculation for molecular simulation of colloidal systems using graphics processing units

    Science.gov (United States)

    Howard, Michael P.; Anderson, Joshua A.; Nikoubashman, Arash; Glotzer, Sharon C.; Panagiotopoulos, Athanassios Z.

    2016-06-01

    We present an algorithm based on linear bounding volume hierarchies (LBVHs) for computing neighbor (Verlet) lists using graphics processing units (GPUs) for colloidal systems characterized by large size disparities. We compare this to a GPU implementation of the current state-of-the-art CPU algorithm based on stenciled cell lists. We report benchmarks for both neighbor list algorithms in a Lennard-Jones binary mixture with synthetic interaction range disparity and a realistic colloid solution. LBVHs outperformed the stenciled cell lists for systems with moderate or large size disparity and dilute or semidilute fractions of large particles, conditions typical of colloidal systems.
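
    For orientation, the object both algorithms construct is a cutoff-based neighbor (Verlet) list. The brute-force O(N²) NumPy reference below, using the minimum-image convention in a periodic box, is the baseline that cell lists and LBVHs accelerate; the particle count, box size and cutoff are arbitrary assumptions.

```python
import numpy as np

def verlet_list(positions, box, r_cut):
    """Brute-force neighbor list with the minimum-image convention (reference only)."""
    n = len(positions)
    neighbors = [[] for _ in range(n)]
    for i in range(n - 1):
        dr = positions[i + 1:] - positions[i]
        dr -= box * np.round(dr / box)          # minimum image in a periodic box
        close = np.where(np.einsum("ij,ij->i", dr, dr) < r_cut**2)[0] + i + 1
        for j in close:
            neighbors[i].append(int(j))
            neighbors[j].append(i)
    return neighbors

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    box = np.array([10.0, 10.0, 10.0])
    pos = rng.uniform(0.0, 10.0, size=(500, 3))
    nl = verlet_list(pos, box, r_cut=1.5)
    print("mean neighbors per particle:", sum(len(v) for v in nl) / len(nl))
```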

  11. Fast network centrality analysis using GPUs

    Directory of Open Access Journals (Sweden)

    Shi Zhiao

    2011-05-01

    Full Text Available Abstract Background With the exploding volume of data generated by continuously evolving high-throughput technologies, biological network analysis problems are growing larger in scale and craving more computational power. General Purpose computation on Graphics Processing Units (GPGPU) provides a cost-effective technology for the study of large-scale biological networks. Designing algorithms that maximize data parallelism is the key to leveraging the power of GPUs. Results We proposed an efficient data-parallel formulation of the All-Pairs Shortest Path problem, which is the key component for shortest path-based centrality computation. A betweenness centrality algorithm built upon this formulation was developed and benchmarked against the most recent GPU-based algorithm. Speedups between 11 and 19% were observed in various simulated scale-free networks. We further designed three algorithms based on this core component to compute closeness centrality, eccentricity centrality and stress centrality. To make all these algorithms available to the research community, we developed a software package, gpu-fan (GPU-based Fast Analysis of Networks), for CUDA-enabled GPUs. Speedups of 10-50× compared with CPU implementations were observed for simulated scale-free networks and real-world biological networks. Conclusions gpu-fan provides a significant performance improvement for centrality computation in large-scale networks. Source code is available under the GNU Public License (GPL) at http://bioinfo.vanderbilt.edu/gpu-fan/.
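
    As a CPU-side illustration of the quantities gpu-fan computes, the same centrality measures can be obtained for a small synthetic scale-free network with NetworkX; the graph size and seed below are arbitrary assumptions, and the GPU package becomes relevant once networks grow far beyond this scale (the abstract reports 10-50× speedups there).

```python
import networkx as nx

# CPU reference for the shortest-path-based centralities that gpu-fan accelerates.
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)   # synthetic scale-free network

betweenness = nx.betweenness_centrality(G)           # all-pairs shortest-path based
closeness = nx.closeness_centrality(G)
eccentricity = nx.eccentricity(G)                     # requires a connected graph

top = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
for node in top:
    print(node, round(betweenness[node], 4), round(closeness[node], 4), eccentricity[node])
```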

  12. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction.Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  13. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  14. Parallel processing using an optical delay-based reservoir computer

    Science.gov (United States)

    Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy

    2016-04-01

    Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, the reservoir computing systems based on delay dynamics discussed in the literature are designed by coupling many stand-alone components, which leads to bulky, non-monolithic systems that lack long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers (SRLs). SRLs are semiconductor lasers in which the laser cavity consists of a ring-shaped waveguide. They are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions, between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even with inputs of a different nature and different input data signals, can be computed simultaneously using a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time series prediction and a nonlinear channel equalization classification task. We take advantage of the different directional modes to process the individual tasks: each directional mode processes one task, to mitigate possible crosstalk between the tasks. Our results indicate that prediction/classification with errors comparable to state-of-the-art performance can be obtained even in the presence of noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].
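
    The reservoir computing scheme itself is easiest to see in a conventional software form. Below is a minimal echo state network in NumPy with a ridge-regression readout, trained on a toy one-step-ahead prediction task; it is a generic software reservoir, not a model of the semiconductor ring laser, and every size and scaling constant is an assumption chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(4000) * 0.05
signal = np.sin(t) + 0.05 * rng.normal(size=t.size)
u, y_target = signal[:-1], signal[1:]

# Reservoir: sparse random recurrent weights scaled to a chosen spectral radius.
n_res, spectral_radius, leak = 300, 0.9, 0.3
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.05)
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Collect reservoir states (leaky-integrator tanh units driven by the input).
x = np.zeros(n_res)
states = np.empty((u.size, n_res))
for k, u_k in enumerate(u):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_k)
    states[k] = x

# Ridge-regression readout, trained on the first 3000 steps (after a washout).
washout, n_train, ridge = 100, 3000, 1e-6
S, Y = states[washout:n_train], y_target[washout:n_train]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)

pred = states[n_train:] @ W_out
nrmse = np.sqrt(np.mean((pred - y_target[n_train:]) ** 2)) / np.std(y_target[n_train:])
print(f"test NRMSE: {nrmse:.3f}")
```

    In the photonic version described above, the recurrent network is replaced by the transient dynamics of the laser, and the two counterpropagating modes play the role of two such reservoirs running in parallel.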

  15. Polymer Field-Theory Simulations on Graphics Processing Units

    CERN Document Server

    Delaney, Kris T

    2012-01-01

    We report the first CUDA graphics-processing-unit (GPU) implementation of the polymer field-theoretic simulation framework for determining fully fluctuating expectation values of equilibrium properties for periodic and select aperiodic polymer systems. Our implementation is suitable both for self-consistent field theory (mean-field) solutions of the field equations, and for fully fluctuating simulations using the complex Langevin approach. Running on NVIDIA Tesla T20 series GPUs, we find double-precision speedups of up to 30x compared to single-core serial calculations on a recent reference CPU, while single-precision calculations proceed up to 60x faster than those on the single CPU core. Due to intensive communications overhead, an MPI implementation running on 64 CPU cores remains two times slower than a single GPU.

  16. PO*WW*ER mobile treatment unit process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous constituents into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst-case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  17. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)

  18. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gani, Rafiqul

    2007-01-01

    Chemical processes are generally modeled through monoscale approaches, which, while not adequate, satisfy a useful role in product-process design. In this case, use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework for model generation, analysis, solution and implementation is necessary for the development and application of the desired model-based approach for product-centric process design/analysis. This goal is achieved through the combination of a system for model development (ModDev) and a modelling tool (MoT) for model translation, analysis and solution. The integration of ModDev, MoT and ICAS or any other external software or process simulator (using COM-Objects) permits the generation of different models and/or process configurations for purposes of simulation, design and analysis. Consequently, it is possible ...

  19. Test bank to accompany Computers data and processing

    CERN Document Server

    Deitel, Harvey M

    1980-01-01

    Test Bank to Accompany Computers and Data Processing provides a variety of questions from which instructors can easily custom-tailor exams appropriate for their particular courses. This book contains over 4000 short-answer questions that span the full range of topics for an introductory computing course. The book is organized into five parts encompassing 19 chapters. It provides a very large number of questions so that instructors can produce different exams testing essentially the same topics in succeeding semesters. Three types of questions are included in this book, including multiple ch

  20. Computational molecular engineering as an emerging technology in process engineering

    CERN Document Server

    Horsch, Martin; Vrabec, Jadran; Hasse, Hans

    2013-01-01

    The present level of development of molecular force field methods is assessed from the point of view of simulation-based engineering, outlining the immediate perspective for further development and highlighting the newly emerging discipline of Computational Molecular Engineering (CME) which makes basic research in soft matter physics fruitful for industrial applications. Within the coming decade, major breakthroughs can be reached if a research focus is placed on processes at interfaces, combining aspects where an increase in the accessible length and time scales due to massively parallel high-performance computing will lead to particularly significant improvements.

  1. Students' Beliefs about Mobile Devices vs. Desktop Computers in South Korea and the United States

    Science.gov (United States)

    Sung, Eunmo; Mayer, Richard E.

    2012-01-01

    College students in the United States and in South Korea completed a 28-item multidimensional scaling (MDS) questionnaire in which they rated the similarity of 28 pairs of multimedia learning materials on a 10-point scale (e.g., narrated animation on a mobile device vs. movie clip on a desktop computer) and a 56-item semantic differential…

  2. Late-Stage Ductile Deformation in Xiongdian-Suhe HP Metamorphic Unit, North-Western Dabie Shan, Central China

    Institute of Scientific and Technical Information of China (English)

    Suo Shutian; Zhong Zengqiu; Zhou Hanwen; You Zhendong

    2004-01-01

    New structural and petrological data unveil a very complicated ductile deformation history of the Xiongdian-Suhe HP metamorphic unit, north-western Dabie Shan, central China. The fine-grained symplectic amphibolite-facies assemblage and coronal structure enveloping eclogite-facies garnet, omphacite and phengite etc., representing strain-free decompression and retrogressive metamorphism, are considered as the main criteria to distinguish between the early-stage deformation under HP metamorphic conditions related to the continental deep subduction and collision, and the late-stage deformation under amphibolite to greenschist-facies conditions occurred in the post-eclogite exhumation processes. Two late-stages of widely developed, sequential ductile deformations D3 and D4, are recognized on the basis of penetrative fabrics and mineral aggregates in the Xiongdian-Suhe HP metamorphic unit, which shows clear, regionally, consistent overprinting relationships. D3 fabrics are best preserved in the Suhe tract of low post-D3 deformation intensity and characterized by steeply dipping layered mylonitic amphibolites associated with doubly vergent folds. They are attributed to a phase of tectonism linked to the initial exhumation of the HP rocks and involved crustal shortening with the development of upright structures and the widespread emplacement of garnet-bearing granites and felsic dikes. D4 structures are attributed to the main episode of ductile extension (D14) with a gently dipping foliation to the north and common intrafolial, recumbent folds in the Xiongdian tract, followed by normal sense top-to-the north ductile shearing (D24) along an important tectonic boundary, the so-called Majiawa-Hexiwan fault (MHF), the westward continuation of the Balifan-Mozitan-Xiaotian fault (BMXF) of the northern Dabie Shan. It is indicated that the two stages of ductile deformation observed in the Xiongdian-Suhe HP metamorphic unit, reflecting the post-eclogite compressional or extrusion

  3. Viking image processing. [digital stereo imagery and computer mosaicking

    Science.gov (United States)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  4. Computer image processing - The Viking experience. [digital enhancement techniques

    Science.gov (United States)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
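
    The two subjective enhancement steps named here, contrast stretching and high-pass filtering, reduce to a few array operations. The NumPy/SciPy sketch below applies a percentile stretch and a Laplacian-based sharpening to a synthetic 8-bit frame; the percentiles, kernel and strength are assumptions, not the parameters of the Viking pipeline.

```python
import numpy as np
from scipy import ndimage

def contrast_stretch(img, lo_pct=2.0, hi_pct=98.0):
    """Linearly map the [lo_pct, hi_pct] percentile range onto 0-255."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = (img.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched * 255.0, 0, 255).astype(np.uint8)

def high_pass_sharpen(img, strength=1.0):
    """Subtract a Laplacian (second-derivative) component to boost fine detail."""
    laplacian = ndimage.laplace(img.astype(float))
    return np.clip(img - strength * laplacian, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Synthetic low-contrast 8-bit frame standing in for a camera image.
    frame = rng.normal(120, 10, size=(512, 512)).clip(0, 255).astype(np.uint8)
    out = high_pass_sharpen(contrast_stretch(frame))
    print("input range:", frame.min(), frame.max(), "output range:", out.min(), out.max())
```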

  5. Minimization of entropy production in separate and connected process units

    Energy Technology Data Exchange (ETDEWEB)

    Roesjorde, Audun

    2004-08-01

    The objective of this thesis was to further develop a methodology for minimizing the entropy production of single and connected chemical process units. When chemical process equipment is designed and operated at the lowest entropy production possible, the energy efficiency of the equipment is enhanced. We found for single process units that the entropy production could be reduced by up to 20-40%, given the degrees of freedom in the optimization. In processes, our results indicated that even bigger reductions were possible. The states of minimum entropy production were studied and important parameters for obtaining significant reductions in the entropy production were identified. From both sustainability and economic viewpoints, knowledge of energy-efficient design and operation is important. In some of the systems we studied, nonequilibrium thermodynamics was used to model the entropy production. In Chapter 2, we gave a brief introduction to different industrial applications of nonequilibrium thermodynamics. The link between local transport phenomena and overall system description makes nonequilibrium thermodynamics a useful tool for understanding the design of chemical process units. We developed the methodology of minimization of entropy production in several steps. First, we analyzed and optimized the entropy production of single units: two alternative concepts of adiabatic distillation, diabatic and heat-integrated distillation, were analyzed and optimized in Chapters 3 to 5. In diabatic distillation, heat exchange is allowed along the column, and it is this feature that increases the energy efficiency of the distillation column. In Chapter 3, we found how a given area of heat transfer should be optimally distributed among the trays in a column separating a mixture of propylene and propane. The results showed that heat exchange was most important on the trays close to the reboiler and condenser. In Chapters 4 and 5, we studied how the entropy

  6. Well Completion Report for Corrective Action Unit 443 Central Nevada Test Area Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-12-01

    The drilling program described in this report is part of a new corrective action strategy for Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA). The drilling program included drilling two boreholes, geophysical well logging, construction of two monitoring/validation (MV) wells with piezometers (MV-4 and MV-5), development of monitor wells and piezometers, recompletion of two existing wells (HTH-1 and UC-1-P-1S), removal of pumps from existing wells (MV-1, MV-2, and MV-3), redevelopment of piezometers associated with existing wells (MV-1, MV-2, and MV-3), and installation of submersible pumps. The new corrective action strategy includes initiating a new 5-year proof-of-concept monitoring period to validate the compliance boundary at CNTA (DOE 2007). The new 5-year proof-of-concept monitoring period begins upon completion of the new monitor wells and collection of samples for laboratory analysis. The new strategy is described in the Corrective Action Decision Document/Corrective Action Plan addendum (DOE 2008a) that the Nevada Division of Environmental Protection approved (NDEP 2008).

  7. Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2008-04-01

    This report presents the 2007 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of the CNTA was transferred from the DOE Office of Environmental Management (DOE-EM) to DOE-LM on October 1, 2006. Requirements for CAU 443 are specified in the Federal Facility Agreement and Consent Order (FFACO 2005) entered into by DOE, the U.S. Department of Defense, and the State of Nevada, and include groundwater monitoring in support of site closure. This is the first groundwater monitoring report prepared by DOE-LM for the CNTA. The CNTA is located north of U.S. Highway 6, approximately 30 miles north of Warm Springs in Nye County, Nevada (Figure 1). Three emplacement boreholes, UC-1, UC-3, and UC-4, were drilled at the CNTA for underground nuclear weapons testing. The initial underground nuclear test, Project Faultless, was conducted in borehole UC-1 at a depth of 3,199 feet (ft) (975 meters) below ground surface on January 19, 1968. The yield of the Project Faultless test was estimated to be 0.2 to 1 megaton (DOE 2004). The test resulted in a down-dropped fault block visible at land surface (Figure 2). No further testing was conducted at the CNTA, and the site was decommissioned as a testing facility in 1973.

  8. Validity of Drought Indices as Drought Predictors in the South-Central United States

    Science.gov (United States)

    Rohli, R. V.; Bushra, N.; Lam, N.; Zou, L.; Mihunov, V.; Reams, M.; Argote, J.

    2015-12-01

    Drought is among the most insidious types of natural disasters and can have tremendous economic and human health impacts. This research analyzes the relationship between two readily accessible drought indices, the Palmer Drought Severity Index (PDSI) and the Palmer Hydrologic Drought Index (PHDI), and the damage incurred by droughts in terms of monetary loss, over the 1975-2010 time period on a monthly basis, for five states in the south-central U.S.A. Because drought damage in the Spatial Hazards Events and Losses Database for the United States (SHELDUS™) is reported at the county level, statistical downscaling techniques were used to estimate the county-level PDSI and PHDI. Correlation analysis using the downscaled indices suggests that although relatively few months contain drought damage reports, drought indices can in general be useful predictors of drought damage at the monthly temporal scale, extended to 12 months, and at the county-wide spatial scale. The varying time lags between the occurrence of drought and the reporting of damage, which may reflect differences in resilience to drought intensity and duration across crop types, irrigation methods, and community adaptation measures over space and time, are thought to contribute to weakened correlations. These results are a reminder of the complexities of anticipating the effects of drought, but they contribute to the effort to improve our ability to mitigate the effects of incipient drought.
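
    The correlation analysis described, relating a monthly drought index to reported losses at lags of up to 12 months, amounts to a lagged Pearson correlation per county. The sketch below uses synthetic series in place of the downscaled PDSI and the SHELDUS damage records, so the variable names and data are illustrative assumptions only.

```python
import numpy as np

def lagged_correlations(index_series, damage_series, max_lag=12):
    """Pearson correlation of damages with the drought index lagged 0..max_lag months."""
    results = {}
    for lag in range(max_lag + 1):
        x = index_series[: len(index_series) - lag] if lag else index_series
        y = damage_series[lag:]
        results[lag] = float(np.corrcoef(x, y)[0, 1])
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    months = 432                                  # 1975-2010, monthly
    pdsi = rng.normal(0.0, 2.0, size=months)      # stand-in for a downscaled index
    # Synthetic damages that respond (negatively) to the index three months earlier.
    damage = np.maximum(0.0, -1.5 * np.roll(pdsi, 3) + rng.normal(0.0, 1.0, size=months))
    for lag, r in lagged_correlations(pdsi, damage, max_lag=6).items():
        print(f"lag {lag:2d} months: r = {r:+.2f}")
```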

  9. Final design of the Switching Network Units for the JT-60SA Central Solenoid

    Energy Technology Data Exchange (ETDEWEB)

    Lampasi, Alessandro, E-mail: alessandro.lampasi@enea.it [National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Frascati (Italy); Coletti, Alberto; Novello, Luca [Fusion for Energy (F4E) Broader Fusion Development Department, Garching (Germany); Matsukawa, Makoto [Japan Atomic Energy Agency, Naka Fusion Institute, Mukouyama, Naka-si, Ibaraki-ken (Japan); Burini, Filippo; Taddia, Giuseppe; Tenconi, Sandro [OCEM Energy Technology, San Giorgio Di Piano (Italy)

    2014-04-15

    This paper describes the approved detailed design of the four Switching Network Units (SNUs) of the superconducting Central Solenoid of JT-60SA, the satellite tokamak that will be built in Naka, Japan, in the framework of the “Broader Approach” cooperation agreement between Europe and Japan. The SNUs can interrupt a current of 20 kA DC in less than 1 ms in order to produce a voltage of 5 kV. Such performance is obtained by inserting an electronic static circuit breaker in parallel to an electromechanical contactor and by matching and coordinating their operations. Any undesired transient overvoltage is limited by an advanced snubber circuit optimized for this application. The SNU resistance values can be adapted to the specific operation scenario. In particular, after successful plasma breakdown, the SNU resistance can be reduced by a making switch. The design choices of the main SNU elements are justified by showing and discussing the performed calculations and simulations. In most cases, the developed design is expected to exceed the performances required by the JT-60SA project.

  10. Ground Motion Prediction Equations for the Central and Eastern United States

    Science.gov (United States)

    Seber, D.; Graizer, V.

    2015-12-01

    A new ground motion prediction equation (GMPE) model, G15, for the Central and Eastern United States (CEUS) is presented. It is based on the modular filter-based approach developed by Graizer and Kalkan (2007, 2009) for the active tectonic environment of the Western US (WUS). The G15 model is based on the NGA-East database for the horizontal peak ground acceleration and 5%-damped pseudo-spectral acceleration RotD50 component (Goulet et al., 2014). In contrast to the active tectonic environment, the database for the CEUS is not sufficient for creating purely empirical GMPEs covering the range of magnitudes and distances required for seismic hazard assessments. Recordings in the NGA-East database are sparse and cover mostly a range of M ... industry (Vs=2800 m/s). The number of model predictors is limited to a few measurable parameters: moment magnitude M, closest distance to the fault rupture plane R, average shear-wave velocity in the upper 30 m of the geological profile VS30, and anelastic attenuation factor Q0. Incorporating anelastic attenuation Q0 as an input parameter allows adjustments based on regional crustal properties. The model covers the range of magnitudes 4.0 ... 10 Hz) and is within the range of other models for frequencies lower than 2.5 Hz

  11. Molecular epidemiology of Acinetobacter baumannii in central intensive care unit in Kosova teaching hospital

    Directory of Open Access Journals (Sweden)

    Lul Raka

    2009-12-01

    Full Text Available Infections caused by bacteria of the genus Acinetobacter pose a significant health care challenge worldwide. Information on molecular epidemiological investigation of outbreaks caused by Acinetobacter species in Kosova is lacking. The present investigation was carried out to elucidate the molecular epidemiology of Acinetobacter baumannii in the Central Intensive Care Unit (CICU) of a University hospital in Kosova using pulsed-field gel electrophoresis (PFGE). During March - July 2006, A. baumannii was isolated from 30 patients, of whom 22 were infected and 8 were colonised. Twenty patients had ventilator-associated pneumonia, one patient had meningitis, and two had co-infection with bloodstream infection and surgical site infection. The most common diagnoses upon admission to the ICU were polytrauma and cerebral hemorrhage. Bacterial isolates were most frequently recovered from endotracheal aspirate (86.7%). First isolation occurred, on average, on day 8 following admission (range 1-26 days). Genotype analysis of A. baumannii isolates identified nine distinct PFGE patterns, with predominance of PFGE clone E, represented by isolates from 9 patients. Eight strains were resistant to carbapenems. The genetic relatedness of the Acinetobacter baumannii isolates was high, indicating cross-transmission within the ICU setting. These results emphasize the need for measures to prevent nosocomial transmission of A. baumannii in the ICU.

  12. A computational model of the integration of landmarks and motion in the insect central complex

    Science.gov (United States)

    Sabo, Chelsea; Vasilaki, Eleni; Barron, Andrew B.; Marshall, James A. R.

    2017-01-01

    The insect central complex (CX) is an enigmatic structure whose computational function has evaded inquiry, but has been implicated in a wide range of behaviours. Recent experimental evidence from the fruit fly (Drosophila melanogaster) and the cockroach (Blaberus discoidalis) has demonstrated the existence of neural activity corresponding to the animal’s orientation within a virtual arena (a neural ‘compass’), and this provides an insight into one component of the CX structure. There are two key features of the compass activity: an offset between the angle represented by the compass and the true angular position of visual features in the arena, and the remapping of the 270° visual arena onto an entire circle of neurons in the compass. Here we present a computational model which can reproduce this experimental evidence in detail, and predicts the computational mechanisms that underlie the data. We predict that both the offset and remapping of the fly’s orientation onto the neural compass can be explained by plasticity in the synaptic weights between segments of the visual field and the neurons representing orientation. Furthermore, we predict that this learning is reliant on the existence of neural pathways that detect rotational motion across the whole visual field and uses this rotation signal to drive the rotation of activity in a neural ring attractor. Our model also reproduces the ‘transitioning’ between visual landmarks seen when rotationally symmetric landmarks are presented. This model can provide the basis for further investigation into the role of the central complex, which promises to be a key structure for understanding insect behaviour, as well as suggesting approaches towards creating fully autonomous robotic agents. PMID:28241061
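
    The two features the model explains, an angular offset and the remapping of a 270° arena onto a full 360° ring of compass cells, can be illustrated with a deliberately simplified sketch: a heading estimate obtained by integrating a rotation signal and expressed as a bump of activity on a ring of cells. The gain, offset, cell count and bump width below are assumptions, and the sketch leaves out the plastic visual-to-compass weights and ring-attractor dynamics that the model itself proposes.

```python
import numpy as np

# Toy 'compass': integrate a rotation (angular-velocity) signal into a heading
# estimate, then express it as a bump of activity on a ring of N cells. The
# 270-degree arena is remapped onto the full ring by a fixed gain, and the bump
# may sit at an arbitrary offset from the true landmark angle.

N_CELLS = 16
ARENA_SPAN = 270.0            # visual arena width, degrees
GAIN = 360.0 / ARENA_SPAN     # remapping of arena angle onto the full ring
OFFSET = 40.0                 # arbitrary offset between compass and landmarks, degrees
KAPPA = 4.0                   # von Mises concentration of the activity bump

preferred = np.arange(N_CELLS) * 360.0 / N_CELLS   # preferred angles of the ring cells

def bump(compass_angle_deg):
    """Von Mises-shaped activity of the ring given the current compass angle."""
    delta = np.deg2rad(preferred - compass_angle_deg)
    activity = np.exp(KAPPA * (np.cos(delta) - 1.0))
    return activity / activity.max()

# Simulate an animal rotating at constant angular velocity within the arena.
dt, omega = 0.05, 30.0        # s, deg/s in arena coordinates
arena_angle, compass_angle = 0.0, OFFSET
for step in range(200):
    arena_angle = (arena_angle + omega * dt) % ARENA_SPAN
    compass_angle = (compass_angle + GAIN * omega * dt) % 360.0   # path integration
    if step % 50 == 0:
        peak = preferred[np.argmax(bump(compass_angle))]
        print(f"arena {arena_angle:6.1f} deg -> compass {compass_angle:6.1f} deg "
              f"(bump peak near {peak:.1f} deg)")
```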

  13. On the Computational Complexity of Degenerate Unit Distance Representations of Graphs

    Science.gov (United States)

    Horvat, Boris; Kratochvíl, Jan; Pisanski, Tomaž

    Some graphs admit drawings in Euclidean k-space in such a (natural) way that edges are represented as line segments of unit length. Such embeddings are called k-dimensional unit distance representations. The embedding is strict if the distances of points representing nonadjacent pairs of vertices are different from 1. When two non-adjacent vertices are drawn at the same point, we say that the representation is degenerate. The computational complexity of nondegenerate embeddings has been studied before. We initiate the study of the computational complexity of (possibly) degenerate embeddings. In particular, we prove that for every k ≥ 2, deciding whether an input graph has a (possibly) degenerate k-dimensional unit distance representation is NP-hard.
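
    The definitions are easy to make concrete: a (possibly degenerate) k-dimensional unit distance representation only requires every edge to be drawn with length 1, a strict one additionally keeps non-adjacent pairs away from distance 1, and a degenerate one allows non-adjacent vertices to coincide. The checker below verifies these conditions for a given 2-dimensional drawing; the unit-square 4-cycle used as input is an illustrative assumption.

```python
import math
from itertools import combinations

def check_unit_distance(points, edges, tol=1e-9):
    """Classify a drawing: every edge must have length 1; strictness also
    requires non-adjacent pairs to avoid distance 1; coincident non-adjacent
    vertices make the representation degenerate."""
    dist = lambda a, b: math.dist(points[a], points[b])
    if any(abs(dist(u, v) - 1.0) > tol for u, v in edges):
        return "not a unit distance representation"
    edge_set = {frozenset(e) for e in edges}
    degenerate, strict = False, True
    for u, v in combinations(points, 2):
        if frozenset((u, v)) in edge_set:
            continue
        d = dist(u, v)
        if d <= tol:
            degenerate = True
        if abs(d - 1.0) <= tol:
            strict = False
    kind = "degenerate" if degenerate else "nondegenerate"
    return f"{kind} unit distance representation" + (", strict" if strict else "")

if __name__ == "__main__":
    # A 4-cycle drawn as a unit square in the plane.
    square = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
    cycle_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(check_unit_distance(square, cycle_edges))
```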

  14. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
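
    To make the texture measure concrete, the sketch below computes a grey level co-occurrence matrix (GLCM) contrast in a moving window over a small synthetic image using plain NumPy. It is a didactic stand-in rather than the anisotropic, rotation-invariant PANTEX implementation, and the window size, displacements and grey-level count are assumptions; it also shows why the per-window work multiplies quickly, which is what motivates the HPC comparison above.

```python
import numpy as np

def glcm_contrast(window, levels=8):
    """Contrast of a grey level co-occurrence matrix, averaged over the
    horizontal and vertical one-pixel displacements."""
    q = np.clip((window.astype(int) * levels) // 256, 0, levels - 1)
    contrasts = []
    for a, b in ((q[:, :-1], q[:, 1:]), (q[:-1, :], q[1:, :])):   # right, down
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)
        glcm /= glcm.sum()
        i, j = np.indices((levels, levels))
        contrasts.append(float(np.sum(glcm * (i - j) ** 2)))
    return sum(contrasts) / len(contrasts)

def texture_map(image, win=16):
    """Slide a non-overlapping window over the image and record GLCM contrast per block."""
    rows, cols = image.shape[0] // win, image.shape[1] // win
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = image[r * win:(r + 1) * win, c * win:(c + 1) * win]
            out[r, c] = glcm_contrast(block)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    img = rng.integers(90, 110, size=(128, 128))              # smooth 'background'
    img[32:96, 32:96] = rng.integers(0, 256, size=(64, 64))   # textured 'built-up' patch
    print(np.round(texture_map(img), 1))
```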

  15. Geomorphology of the central Red Sea Rift: Determining spreading processes

    Science.gov (United States)

    Augustin, Nico; van der Zwan, Froukje M.; Devey, Colin W.; Ligi, Marco; Kwasnitschka, Tom; Feldens, Peter; Bantan, Rashad A.; Basaham, Ali S.

    2016-12-01

    Continental rifting and ocean basin formation is occurring today in the Red Sea, providing a possible modern analogue for the creation of mid-ocean ridges. Yet many of the seafloor features observed along the axis of the Red Sea appear anomalous compared to ancient and modern examples of mid-ocean ridges in other parts of the world, making it unclear, until recently, whether the Red Sea is truly analogous. Recent work suggests that the main morphological differences between the Red Sea Rift (RSR) and other mid-ocean ridges are due to the presence and movement of giant, submarine salt flows, which blanket large portions of the rift valley and thereby the oceanic crust. Using ship-based, high-resolution multibeam bathymetry of the central RSR between 16.5°N and 23°N we focus here on the RSR volcanic terrains not covered by salt and sediments and compare their morphologies to those observed along slow and ultra-slow spreading ridges elsewhere. Regional variations in style and intensity of volcanism can be related to variations in volcanic activity and mantle heat flow. The Red Sea oceanic seafloor shows typical features of mature (ultra)slow-spreading mid-ocean ridges, such as 2nd order discontinuities (overlapping spreading centres) and magma focussing in the segment centres (forming spreading-perpendicular volcanic ridges of thick oceanic crust). The occurrence of melt-salt interaction at locations where salt glaciers blanket the neovolcanic zone, and the absence of large detachment faults are unique features of the central RSR. These features can be related to the young character of the Red Sea and may be applicable to all young oceanic rifts, associated with plumes and/or evaporites. Thus, the RSR falls in line with (ultra)slow-spreading mid-ocean ridges globally, which makes the Red Sea a unique but highly important type example for initiation of slow rifting and seafloor spreading and one of the most interesting targets for future ocean research.

  16. The University Next Door: Developing a Centralized Unit That Strategically Cultivates Community Engagement at an Urban University

    Science.gov (United States)

    Holton, Valerie L.; Early, Jennifer L.; Resler, Meghan; Trussell, Audrey; Howard, Catherine

    2016-01-01

    Using Kotter's model of change as a framework, this case study will describe the structure and efforts of a centralized unit within an urban, research university to deepen and extend the institutionalization of community engagement. The change model will be described along with details about the implemented strategies and practices that fall…

  17. EFFECT OF SOWING DATE OF TRITICALE ON SEASONAL HERBAGE PRODUCTION IN THE CENTRAL APPALACHIAN HIGHLANDS OF THE UNITED STATES

    Science.gov (United States)

    Mixed perennial, cool-season species are the dominant components of pastures in the central Appalachian Region of the United States. Forage production from such pastures is often limited during hot, dry summer months and cool, early and late season periods. We studied forage production and stand d...

  18. Computer aided analysis, simulation and optimisation of thermal sterilisation processes.

    Science.gov (United States)

    Narayanan, C M; Banerjee, Arindam

    2013-04-01

    Although thermal sterilisation is a widely employed industrial process, little work is reported in the available literature, including patents, on the mathematical analysis and simulation of these processes. In the present work, software packages have been developed for computer aided optimum design of thermal sterilisation processes. Systems involving steam sparging, jacketed heating/cooling, helical coils submerged in agitated vessels and systems that employ external heat exchangers (double pipe, shell and tube and plate exchangers) have been considered. Both batch and continuous operations have been analysed and simulated. The dependence of the del factor on system/operating parameters such as mass or volume of substrate to be sterilised per batch, speed of agitation, helix diameter, substrate-to-steam ratio, rate of substrate circulation through the heat exchanger and that through the holding tube have been analysed separately for each mode of sterilisation. Axial dispersion in the holding tube has also been adequately accounted for through an appropriately defined axial dispersion coefficient. The effect of exchanger characteristics/specifications on the system performance has also been analysed. The multiparameter computer aided design (CAD) software packages prepared are thus highly versatile and permit the optimum choice of operating variables for the selected processes. The computed results have been compared with extensive data collected from a number of industries (distilleries, food processing and pharmaceutical industries) and pilot plants, and satisfactory agreement has been observed between the two, thereby confirming the accuracy of the CAD software developed. No simplifying assumptions have been made during the analysis, and the design of the associated heating/cooling equipment has been performed utilising the most up-to-date design correlations and computer software.
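    As an illustration of one of the quantities discussed above, the del factor for a given temperature–time profile can be evaluated by numerically integrating an Arrhenius death-rate term. The kinetic constants and the heating/holding/cooling profile below are placeholder values chosen only to make the sketch run; they are not taken from the packages described in the abstract.

```python
import numpy as np

A = 7.0e35    # 1/s, pre-exponential factor (illustrative, spore-like kinetics)
E = 2.83e5    # J/mol, activation energy (illustrative)
R = 8.314     # J/(mol K)

def del_factor(times_s, temps_K):
    """del = ln(N0/N) = integral of A*exp(-E/(R*T(t))) dt, trapezoidal rule."""
    k = A * np.exp(-E / (R * np.asarray(temps_K, float)))
    t = np.asarray(times_s, float)
    return float(np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(t)))

# simple heat-hold-cool profile: 25 C -> 121 C, 20 min hold, cool to 40 C
t = np.linspace(0.0, 3600.0, 721)                       # one hour in 5 s steps
T = np.interp(t, [0.0, 900.0, 2100.0, 3600.0], [298.0, 394.0, 394.0, 313.0])
print(f"del factor = {del_factor(t, T):.1f}")
```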

  19. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    Science.gov (United States)

    Menekse, Muhsin

    2015-01-01

    While there has been remarkable interest in making computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer science courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  20. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Indian Academy of Sciences (India)

    M. K. Griffiths; V. Fedun; R. Erdélyi

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1–3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.
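    The programming pattern behind such GPU ports can be sketched in a few lines: the per-cell finite-difference update is written as whole-array arithmetic with no explicit loops, which maps naturally onto one GPU thread per cell. The NumPy fragment below only illustrates that pattern (a scalar diffusion step on a periodic grid); it is not part of the SMAUG code.

```python
import numpy as np

def diffuse_step(u, dt=0.1, dx=1.0, kappa=0.25):
    """One explicit diffusion update on a 2D periodic grid, written as
    whole-array operations (the shape a GPU kernel gives one thread per cell)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx ** 2
    return u + dt * kappa * lap

u = np.zeros((256, 256))
u[128, 128] = 1.0            # point perturbation
for _ in range(100):
    u = diffuse_step(u)
```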

  1. Future evolution of the Fast TracKer (FTK) processing unit

    CERN Document Server

    Gentsos, C; The ATLAS collaboration; Giannetti, P; Magalotti, D; Nikolaidis, S

    2014-01-01

    The Fast Tracker (FTK) processor [1] for the ATLAS experiment has a computing core made of 128 Processing Units that reconstruct tracks in the silicon detector in a ~100 μsec deep pipeline. The track parameter resolution provided by FTK enables the HLT trigger to efficiently identify and reconstruct significant samples of fermionic Higgs decays. Data processing speed is achieved with custom VLSI pattern recognition, linearized track fitting executed inside modern FPGAs, pipelining, and parallel processing. One large FPGA executes full-resolution track fitting inside low-resolution candidate tracks found by a set of 16 custom ASIC devices, called Associative Memories (AM chips) [2]. The FTK dual structure, based on the cooperation of dedicated VLSI AM and programmable FPGAs, is maintained to achieve further technology performance, miniaturization and integration of the current state-of-the-art prototypes. This allows new applications within and outside the High Energy Physics field to be fully exploited. We plan t...

  2. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  3. Computer aided microbial safety design of food processes.

    Science.gov (United States)

    Schellekens, M; Martens, T; Roberts, T A; Mackey, B M; Nicolaï, B M; Van Impe, J F; De Baerdemaeker, J

    1994-12-01

    To reduce the time required for product development, to avoid expensive experimental tests, and to quantify safety risks for fresh products and the consequences of processing, there is a growing interest in computer aided food process design. This paper discusses the application of hybrid object-oriented and rule-based expert system technology to represent the data and knowledge of microbial experts and food engineers. Finite element models for heat transfer calculation routines, microbial growth and inactivation models and texture kinetics are combined with food composition data, thermophysical properties, process steps and expert knowledge on the type and quantity of microbial contamination. A prototype system has been developed to evaluate the effect of changes in food composition, process steps and process parameters on the microbiological safety and textural quality of foods.

  4. First International Conference Multimedia Processing, Communication and Computing Applications

    CERN Document Server

    Guru, Devanur

    2013-01-01

    ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the Conference is ‘Multimedia Processing and its Applications’. Multimedia processing has been an active research area contributing to many frontiers of today’s science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in various fields of multimedia processing, which is widely used in many disciplines such as Medical Diagnosis, Digital Forensics, Object Recognition, Image and Video Analysis, Robotics, Military, Automotive Industries, Surveillance and Security, Quality Inspection, etc. The book will assist the research community in gaining insight into the overlapping work being carried out across the globe at many medical hospitals and instit...

  5. Recognition of oral spelling is diagnostic of the central reading processes.

    Science.gov (United States)

    Schubert, Teresa; McCloskey, Michael

    2015-01-01

    The task of recognition of oral spelling (stimulus: "C-A-T", response: "cat") is often administered to individuals with acquired written language disorders, yet there is no consensus about the underlying cognitive processes. We adjudicate between two existing hypotheses: Recognition of oral spelling uses central reading processes, or recognition of oral spelling uses central spelling processes in reverse. We tested the recognition of oral spelling and spelling to dictation abilities of a single individual with acquired dyslexia and dysgraphia. She was impaired relative to matched controls in spelling to dictation but unimpaired in recognition of oral spelling. Recognition of oral spelling for exception words (e.g., colonel) and pronounceable nonwords (e.g., larth) was intact. Our results were predicted by the hypothesis that recognition of oral spelling involves the central reading processes. We conclude that recognition of oral spelling is a useful tool for probing the integrity of the central reading processes.

  6. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the time the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first set comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second set is the Second Stage board. The associative memories perform pattern matching, looking for correlations within the incoming data that are compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  7. The ATLAS Fast Tracker Processing Units - track finding and fitting

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration; Ancu, Lucian Stefan; Andreani, Alessandro; Annovi, Alberto; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Bogdan, Mircea Arghir; Bryant, Patrick; Calabro, Domenico; Citraro, Saverio; Crescioli, Francesco; Dell'Orso, Mauro; Donati, Simone; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the time the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first set comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second set is the Second Stage board. The associative memories perform pattern matching, looking for correlations within the incoming data that are compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  8. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  10. Using real time process measurements to reduce catheter related bloodstream infections in the intensive care unit

    Science.gov (United States)

    Wall, R; Ely, E; Elasy, T; Dittus, R; Foss, J; Wilkerson, K; Speroff, T

    2005-01-01

    

Problem: Measuring a process of care in real time is essential for continuous quality improvement (CQI). Our inability to measure the process of central venous catheter (CVC) care in real time prevented CQI efforts aimed at reducing catheter related bloodstream infections (CR-BSIs) from these devices. Design: A system was developed for measuring the process of CVC care in real time. We used these new process measurements to continuously monitor the system, guide CQI activities, and deliver performance feedback to providers. Setting: Adult medical intensive care unit (MICU). Key measures for improvement: Measured process of CVC care in real time; CR-BSI rate and time between CR-BSI events; and performance feedback to staff. Strategies for change: An interdisciplinary team developed a standardized, user friendly nursing checklist for CVC insertion. Infection control practitioners scanned the completed checklists into a computerized database, thereby generating real time measurements for the process of CVC insertion. Armed with these new process measurements, the team optimized the impact of a multifaceted intervention aimed at reducing CR-BSIs. Effects of change: The new checklist immediately provided real time measurements for the process of CVC insertion. These process measures allowed the team to directly monitor adherence to evidence-based guidelines. Through continuous process measurement, the team successfully overcame barriers to change, reduced the CR-BSI rate, and improved patient safety. Two years after the introduction of the checklist the CR-BSI rate remained at a historic low. Lessons learnt: Measuring the process of CVC care in real time is feasible in the ICU. When trying to improve care, real time process measurements are an excellent tool for overcoming barriers to change and enhancing the sustainability of efforts. To continually improve patient safety, healthcare organizations should continually measure their key clinical processes in real

  11. COMPUTATIONALLY INTELLIGENT MODELLING AND CONTROL OF FLUIDIZED BED COMBUSTION PROCESS

    Directory of Open Access Journals (Sweden)

    Ivan T Ćirić

    2011-01-01

    Full Text Available In this paper, modelling and control approaches for the fluidized bed combustion process that are based on computational intelligence have been considered. The proposed adaptive neuro-fuzzy-genetic modelling and intelligent control strategies efficiently combine available expert knowledge with experimental data. Firstly, based on qualitative information on the desulphurization process, models of the SO2 emission in fluidized bed combustion have been developed, which provide for economical and efficient reduction of SO2 in FBC through estimation of optimal process parameters and design of intelligent control systems based on the defined emission models. An efficient fuzzy nonlinear FBC process modelling strategy obtained by combining several linearized combustion models has also been presented. Finally, fuzzy and conventional process control systems for fuel flow and primary air flow regulation, based on the developed models and optimized by genetic algorithms, have also been developed. The obtained results indicate that the computationally intelligent approach can be successfully applied to the modelling and control of the complex fluidized bed combustion process.

  12. GPU-Based FFT Computation for Multi-Gigabit WirelessHD Baseband Processing

    Directory of Open Access Journals (Sweden)

    Nicholas Hinitt

    2010-01-01

    Full Text Available The next generation Graphics Processing Units (GPUs) are being considered for non-graphics applications. Millimeter wave (60 GHz) wireless networks that are capable of multi-gigabit per second (Gbps) transfer rates require a significant baseband throughput. In this work, we consider the baseband of WirelessHD, a 60 GHz communications system, which can provide a data rate of up to 3.8 Gbps over a short range wireless link. Thus, we explore the feasibility of achieving gigabit baseband throughput using GPUs. One of the most computationally intensive functions commonly used in baseband communications, the Fast Fourier Transform (FFT) algorithm, is implemented on an NVIDIA GPU using their general-purpose computing platform called the Compute Unified Device Architecture (CUDA). The paper first investigates the implementation of an FFT algorithm using the GPU hardware and exploiting the computational capability available. It then outlines the limitations discovered and the methods used to overcome these challenges. Finally, a new algorithm to compute the FFT is proposed, which reduces interprocessor communication. It is further optimized by improving memory access, enabling the processing rate to exceed 4 Gbps and achieving a processing time for a 512-point FFT of less than 200 ns using a two-GPU solution.
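    A rough feel for the numbers quoted above can be obtained by timing batched 512-point FFTs and converting the result into a throughput estimate. The NumPy sketch below only illustrates the bookkeeping on a CPU; the batch size and the assumed 4 bits per sample are arbitrary choices, and the paper's figures come from a CUDA implementation on two GPUs.

```python
import time
import numpy as np

N = 512          # FFT length used in the WirelessHD baseband
BATCH = 1 << 14  # number of FFTs per timed batch (arbitrary)

x = (np.random.randn(BATCH, N) + 1j * np.random.randn(BATCH, N)).astype(np.complex64)

t0 = time.perf_counter()
X = np.fft.fft(x, axis=1)
dt = time.perf_counter() - t0

per_fft_ns = dt / BATCH * 1e9
bits_per_sample = 4                                   # hypothetical modulation assumption
throughput_gbps = BATCH * N * bits_per_sample / dt / 1e9
print(f"{per_fft_ns:.0f} ns per 512-point FFT, ~{throughput_gbps:.1f} Gbit/s equivalent")
```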

  13. New computed radiography processing condition for whole-spine radiography.

    Science.gov (United States)

    Sasagawa, Takeshi; Kunogi, Junichi; Masuyama, Shigeru; Ogihara, Satoshi; Takeuchi, Yosuke; Takeshita, Yujiro; Kamiya, Naokazu; Murakami, Hideki; Tsuchiya, Hiroyuki

    2011-12-06

    Computed radiography (CR) has many advantages compared with conventional radiographs, especially in image processing. Although CR is being used in chest radiography and mammography, it has not been applied to spine imaging. The purposes of this study were to formulate a set of new CR processing parameters and to test whether the resultant whole-spine radiographs visualized the spine more clearly than conventional images. This study included 29 patients who underwent whole-spine radiographs. We used 3 image processing methods to improve the clarity of whole-spine radiographs: gradation processing, dynamic range control processing, and multi-objective frequency processing. Radiograph definition was evaluated using vertebrae sampled from each region of the whole spine, specifically C4, C7, T8, T12, and L3; evaluation of the lateral view also included the sacral spine and femoral head. Image definition was assessed using a 3-point grading system. The conventional and processed CR images (both frontal and lateral views) were evaluated by 5 spine surgeons. In all spinal regions on both frontal and lateral views, the processed images showed statistically significantly better clarity than the corresponding conventional images, especially at T12, L3, the sacral spine, and the femoral head on the lateral view. Our set of new CR processing parameters can improve the clarity of whole-spine radiographs compared with conventional images. The greatest advantage of image processing was that it enabled clear depiction of the thoracolumbar junction, lumbar vertebrae, sacrum, and femoral head in the lateral view.

  14. Meshing scheme in the computation of spontaneous fault rupture process

    Institute of Scientific and Technical Information of China (English)

    LIU Qi-ming; CHEN Xiao-fei

    2008-01-01

    The choice of spatial grid size has long been a crucial issue in all kinds of numerical algorithms. Using BIEM (Boundary Integral Equation Method) to calculate the rupture process of a planar fault embedded in an isotropic and homogeneous full space with a simple discretization scheme, this paper focuses on what grid size should be applied to control the error while maintaining computing efficiency for different parameter combinations of (Dc, Te), where Dc is the critical slip-weakening distance and Te is the initial stress on the fault plane. We have preliminarily established how to choose the spatial grid size properly, which is of great significance in the computation of the seismic source rupture process with BIEM.

  15. Solar physics applications of computer graphics and image processing

    Science.gov (United States)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  16. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  17. Synthesis of computational structures for analog signal processing

    CERN Document Server

    Popa, Cosmin Radu

    2011-01-01

    Presents the most important classes of computational structures for analog signal processing, including differential or multiplier structures, squaring or square-rooting circuits, exponential or Euclidean distance structures and active resistor circuits. Introduces the original concept of the multifunctional circuit, an active structure that is able to implement, starting from the same circuit core, a multitude of continuous mathematical functions. Covers mathematical analysis, design and implementation of a multitude of function generator structures.

  18. Heterogeneous arsenic enrichment in meta-sedimentary rocks in central Maine, United States

    Energy Technology Data Exchange (ETDEWEB)

    O' Shea, Beth, E-mail: bethoshea@sandiego.edu [Department of Marine Science and Environmental Studies, University of San Diego, 5998 Alcala Park, San Diego, CA 92110 (United States); Lamont-Doherty Earth Observatory of Columbia University, 61 Route 9W, Palisades, NY 10964 (United States); Stransky, Megan; Leitheiser, Sara [Department of Marine Science and Environmental Studies, University of San Diego, 5998 Alcala Park, San Diego, CA 92110 (United States); Brock, Patrick [School of Earth and Environmental Sciences, Queens College, City University of New York, 65-30 Kissena Blvd., Flushing, NY 11367 (United States); Marvinney, Robert G. [Maine Geological Survey, 93 State House Station, Augusta, ME 04333 (United States); Zheng, Yan [School of Earth and Environmental Sciences, Queens College, City University of New York, 65-30 Kissena Blvd., Flushing, NY 11367 (United States); Lamont-Doherty Earth Observatory of Columbia University, 61 Route 9W, Palisades, NY 10964 (United States)

    2015-02-01

    Arsenic is enriched up to 28 times the average crustal abundance of 4.8 mg kg−1 for meta-sedimentary rocks of two adjacent formations in central Maine, USA where groundwater in the bedrock aquifer frequently contains elevated As levels. The Waterville Formation contains higher arsenic concentrations (mean As 32.9 mg kg−1, median 12.1 mg kg−1, n = 38) than the neighboring Vassalboro Group (mean As 19.1 mg kg−1, median 6.0 mg kg−1, n = 38). The Waterville Formation is a pelitic meta-sedimentary unit with abundant pyrite either visible or observed by scanning electron microprobe. Concentrations of As and S are strongly correlated (r = 0.88, p < 0.05) in the low grade phyllite rocks, and arsenic is detected up to 1944 mg kg−1 in pyrite measured by electron microprobe. In contrast, statistically significant (p < 0.05) correlations between concentrations of As and S are absent in the calcareous meta-sediments of the Vassalboro Group, consistent with the absence of arsenic-rich pyrite in the protolith. Metamorphism converts the arsenic-rich pyrite to arsenic-poor pyrrhotite (mean As 1 mg kg−1, n = 15) during de-sulfidation reactions: the resulting metamorphic rocks contain arsenic but little or no sulfur indicating that the arsenic is now in new mineral hosts. Secondary weathering products such as iron oxides may host As, yet the geochemical methods employed (oxidative and reductive leaching) do not conclusively indicate that arsenic is associated only with these. Instead, silicate minerals such as biotite and garnet are present in metamorphic zones where arsenic is enriched (up to 130.8 mg kg−1 As) where S is 0%. Redistribution of already variable As in the protolith during metamorphism and contemporary water–rock interaction in the aquifers, all combine to contribute to a spatially heterogeneous groundwater arsenic distribution in bedrock aquifers. - Highlights: • Arsenic is enriched up to 138 mg kg

  19. Implementation of central venous catheter bundle in an intensive care unit in Kuwait: Effect on central line-associated bloodstream infections.

    Science.gov (United States)

    Salama, Mona F; Jamal, Wafaa; Al Mousa, Haifa; Rotimi, Vincent

    2016-01-01

    Central line-associated bloodstream infections (CLABSIs) are important healthcare-associated infections in critical care units. They cause substantial morbidity and mortality and incur high costs. The use of a central venous line (CVL) insertion bundle has been shown to decrease the incidence of CLABSIs. Our aim was to study the impact of a CVL insertion bundle on the incidence of CLABSI and to study the causative microbial agents in an intensive care unit in Kuwait. Surveillance for CLABSI was conducted by a trained infection control team using National Healthcare Safety Network (NHSN) case definitions and device-days measurement methods. During the intervention period, nursing staff used a central line care bundle consisting of (1) hand hygiene by the inserter; (2) maximal barrier precautions upon insertion by the physician inserting the catheter, with a sterile drape covering the patient from head to toe; (3) use of a 2% chlorhexidine gluconate (CHG) in 70% ethanol scrub for the insertion site; (4) optimum catheter site selection; and (5) daily examination of the necessity of the central line. During the pre-intervention period, there were 5367 documented catheter-days and 80 CLABSIs, for an incidence density of 14.9 CLABSIs per 1000 catheter-days. After implementation of the interventions, there were 5052 catheter-days and 56 CLABSIs, for an incidence density of 11.08 per 1000 catheter-days. The reduction in CLABSIs per 1000 catheter-days was not statistically significant (P=0.0859). This study demonstrates that implementation of a central venous catheter post-insertion care bundle was associated with a reduction in CLABSI in an intensive care setting.
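    The incidence densities reported above follow directly from the raw counts; the short check below reproduces them (CLABSIs per 1000 catheter-days = infections / catheter-days × 1000).

```python
def clabsi_rate(infections, catheter_days):
    """CLABSIs per 1000 catheter-days."""
    return infections / catheter_days * 1000

before = clabsi_rate(80, 5367)   # pre-intervention period
after = clabsi_rate(56, 5052)    # post-intervention period
print(f"before: {before:.2f}, after: {after:.2f} per 1000 catheter-days")
# prints roughly 14.91 and 11.08, matching the figures quoted above
```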

  20. The AMchip04 and the Processing Unit Prototype for the FastTracker

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L; Volpi, G

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  1. Thermochemical Process Development Unit: Researching Fuels from Biomass, Bioenergy Technologies (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2009-01-01

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a unique facility dedicated to researching thermochemical processes to produce fuels from biomass.

  2. Computer-delivered patient simulations in the United States Medical Licensing Examination (USMLE).

    Science.gov (United States)

    Dillon, Gerard F; Clauser, Brian E

    2009-01-01

    To obtain a full and unrestricted license to practice medicine in the United States, students and graduates of the MD-granting US medical schools and of medical schools located outside of the United States must take and pass the United States Medical Licensing Examination. United States Medical Licensing Examination began as a series of paper-and-pencil examinations in the early 1990s and converted to computer-delivery in 1999. With this change to the computerized format came the opportunity to introduce computer-simulated patients, which had been under development at the National Board of Medical Examiners for a number of years. This testing format, called a computer-based case simulation, requires the examinee to manage a simulated patient in simulated time. The examinee can select options for history-taking and physical examination. Diagnostic studies and treatment are ordered via free-text entry, and the examinee controls the advance of simulated time and the location of the patient in the health care setting. Although the inclusion of this format has brought a number of practical, psychometric, and security challenges, its addition has allowed a significant expansion in ways to assess examinees on their diagnostic decision making and therapeutic intervention skills and on developing and implementing a reasonable patient management plan.

  3. Analyzing shelf life of processed cheese by soft computing

    Directory of Open Access Journals (Sweden)

    S. Goyal

    2012-09-01

    Full Text Available Feedforward soft computing multilayer models were developed for analyzing the shelf life of processed cheese. The models were trained with 80% of the total observations and validated with the remaining 20% of the data. Mean Square Error, Root Mean Square Error, Coefficient of Determination and Nash–Sutcliffe Coefficient were used in order to compare the prediction ability of the developed models. From the study, it is concluded that feedforward multilayer models are good at predicting the shelf life of processed cheese stored at 7–8 °C.
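    The four agreement measures named above are straightforward to compute once observed and predicted shelf-life values are available; a minimal sketch follows, with purely hypothetical example values (in days) rather than data from the study.

```python
import numpy as np

def fit_metrics(obs, pred):
    """MSE, RMSE, coefficient of determination (squared Pearson r) and
    Nash-Sutcliffe efficiency for a set of observed vs. predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mse = float(np.mean((obs - pred) ** 2))
    rmse = float(np.sqrt(mse))
    r2 = float(np.corrcoef(obs, pred)[0, 1] ** 2)
    nse = float(1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))
    return mse, rmse, r2, nse

observed = [30, 32, 28, 35, 31]      # hypothetical shelf-life observations, days
predicted = [29, 33, 27, 34, 32]     # hypothetical model predictions, days
print(fit_metrics(observed, predicted))
```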

  4. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  5. Automation of the CFD Process on Distributed Computing Systems

    Science.gov (United States)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational
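    The simple first-in-first-out queueing behaviour mentioned above can be illustrated with a short script; the sketch below is not the ADTT script system itself, and the solver command and input-file names are hypothetical.

```python
import subprocess
from collections import deque

def run_fifo(cases, solver_cmd):
    """Run each case in arrival order on a host without native queueing software.
    cases: list of input-file paths; solver_cmd: command prefix, e.g. ['./flow_solver']."""
    queue = deque(cases)
    results = []
    while queue:
        case = queue.popleft()                         # first in, first out
        proc = subprocess.run(solver_cmd + [case], capture_output=True, text=True)
        results.append((case, proc.returncode))
    return results

# usage sketch (hypothetical file and solver names):
# run_fifo(["wing_aoa2.inp", "wing_aoa4.inp"], ["./flow_solver"])
```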

  6. Computational simulation of multi-strut central lobed injection of hydrogen in a scramjet combustor

    Directory of Open Access Journals (Sweden)

    Gautam Choubey

    2016-09-01

    Full Text Available Multi-strut injection is an approach to increase the overall performance of a scramjet while reducing the risk of thermal choking in a supersonic combustor. Hence, a computational simulation of a scramjet combustor at Mach 2.5 with multiple central lobed struts (three struts) is presented and discussed in this research article. The geometry and model used here are a slight modification of the DLR (German Aerospace Center) scramjet model. Present results show that the three-strut injector improves the performance of the scramjet combustor compared to a single-strut injector. The combustion efficiency is also found to be highest in the case of the three-strut fuel injection system. In order to validate the results, the numerical data for single-strut injection are compared with experimental results taken from the literature.

  7. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent a neuron or brain cortex on the basis of biology and graph theory; an intra-modular network is then developed with the modelling algorithm by mapping nodes and edges, and the bi-modular network is delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is consistent with known memory phenomena from an information processing viewpoint.

  8. Experimental determination of the segregation process using computer tomography

    Directory of Open Access Journals (Sweden)

    Konstantin Beckmann

    2016-08-01

    Full Text Available Modelling methods such as DEM and CFD are increasingly used for developing highly efficient combine cleaning systems. For this purpose it is necessary to verify the complex segregation and separation processes in the combine cleaning system. One way is to determine the segregation and separation function using 3D computer tomography (CT). This method makes it possible to visualize and analyse the movement behaviour of the components of the mixture during the segregation and separation process and to derive descriptive process parameters. To achieve this aim, a mechanically excited miniature test rig was designed and built at the company CLAAS Selbstfahrende Erntemaschinen GmbH. The investigations were carried out at the Fraunhofer Institute for Integrated Circuits IIS. Through the evaluation of the recorded images the segregation process is described visually. A more detailed analysis enabled the development of segregation and separation functions based on the different densities of grain and material other than grain.

  9. Computational Approaches for Modeling the Multiphysics in Pultrusion Process

    DEFF Research Database (Denmark)

    Carlone, P.; Baran, Ismet; Hattel, Jesper Henri;

    2013-01-01

    Pultrusion is a continuous manufacturing process used to produce high-strength composite profiles with constant cross section. The mutual interactions between heat transfer, resin flow and cure reaction, variation in the material properties, and stress/distortion evolutions strongly affect the process dynamics together with the mechanical properties and the geometrical precision of the final product. In the present work, pultrusion process simulations are performed for a unidirectional (UD) graphite/epoxy composite rod including several processing physics, such as fluid flow, heat transfer, chemical reaction, and solid mechanics. The pressure increase and the resin flow at the tapered inlet of the die are calculated by means of a computational fluid dynamics (CFD) finite volume model. Several models, based on different homogenization levels and solution schemes, are proposed and compared...

  11. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    Science.gov (United States)

    Pendola, Martin; Saha, Subrata

    2015-01-01

    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  12. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    Science.gov (United States)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factor of improvement in the analytical versus numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU was 1.6×10⁴, 4.9×10³, and 3.8×10³, respectively. Various ideas for algorithm enhancements will be discussed.
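    The core idea can be sketched compactly: if a layout decomposes into axis-aligned rectangles, its Fourier transform is a closed-form sum of shifted sinc products, so no pixelated image or FFT is required. The rectangle list and frequency grid below are illustrative, and the sketch ignores the hierarchy handling and GPU parallelism that the paper's implementation provides.

```python
import numpy as np

def rect_ft(fx, fy, x0, y0, w, h):
    """Analytical Fourier transform of a w-by-h rectangle centred at (x0, y0)."""
    return (w * h * np.sinc(w * fx) * np.sinc(h * fy)
            * np.exp(-2j * np.pi * (fx * x0 + fy * y0)))

def layout_ft(fx, fy, rectangles):
    """Sum of the analytical transforms of all rectangles in the layout."""
    F = np.zeros_like(fx, dtype=complex)
    for x0, y0, w, h in rectangles:
        F += rect_ft(fx, fy, x0, y0, w, h)
    return F

# two features of a toy layout, dimensions in micrometres
rects = [(0.0, 0.0, 2.0, 0.5), (3.0, 1.0, 0.5, 2.0)]
fx, fy = np.meshgrid(np.linspace(-2, 2, 256), np.linspace(-2, 2, 256))
intensity = np.abs(layout_ft(fx, fy, rects)) ** 2    # squared modulus, as used above
```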

  13. Heterogeneous arsenic enrichment in meta-sedimentary rocks in central Maine, United States

    Science.gov (United States)

    O’Shea, Beth; Stransky, Megan; Leitheiser, Sara; Brock, Patrick; Marvinney, Robert G.; Zheng, Yan

    2014-01-01

    Arsenic is enriched up to 28 times the average crustal abundance of 4.8 mg kg−1 for meta-sedimentary rocks of two adjacent formations in central Maine, USA where groundwater in the bedrock aquifer frequently contains elevated As levels. The Waterville Formation contains higher arsenic concentrations (mean As 32.9 mg kg−1, median 12.1 mg kg−1, n=36) than the neighboring Vassalboro Group (mean As 19.1 mg kg−1, median 6.0 mg kg−1, n=36). The Waterville Formation is a pelitic meta-sedimentary unit with abundant pyrite either visible or observed by scanning electron microprobe. Concentrations of As and S are strongly correlated (r=0.88, p<0.05) in the low grade phyllite rocks, and arsenic is detected up to 1,944 mg kg−1 in pyrite measured by electron microprobe. In contrast, statistically significant (p<0.05) correlations between concentrations of As and S are absent in the calcareous meta-sediments of the Vassalboro Group, consistent with the absence of arsenic-rich pyrite in the protolith. Metamorphism converts the arsenic-rich pyrite to arsenic-poor pyrrhotite (mean As 1 mg kg−1, n=15) during de-sulfidation reactions: the resulting metamorphic rocks contain arsenic but little or no sulfur indicating that the arsenic is now in new mineral hosts. Secondary weathering products such as iron oxides may host As, yet the geochemical methods employed (oxidative and reductive leaching) do not conclusively indicate that arsenic is associated only with these. Instead, silicate minerals such as biotite and garnet are present in metamorphic zones where arsenic is enriched (up to 130.8 mg kg−1 As) where S is 0%. Redistribution of already variable As in the protolith during metamorphism and contemporary water-rock interaction in the aquifers, all combine to contribute to a spatially heterogeneous groundwater arsenic distribution in bedrock aquifers. PMID:24861530

  14. Structure and integrity of fish assemblages in streams associated to conservation units in Central Brazil

    Directory of Open Access Journals (Sweden)

    Thiago Belisário d'Araújo Couto

    Full Text Available This study aims to characterize the spatial and seasonal distribution of the fish assemblage and evaluate the integrity of streams in a sustainable use area that includes integral protection conservation units in Distrito Federal, Central Brazil (Cerrado biome). For the study, 12 stretches of 8 streams were sampled in 2008 (dry season) and 2009 (wet season). For this evaluation, the Physical Habitat Index (PHI), vegetation cover (VC), pH, dissolved oxygen, turbidity, and conductivity were estimated. We recorded 22 species, including about eight undescribed species, for a total of 2,327 individuals. The most representative families in number of species were Characidae (31.8%), Loricariidae (31.8%), and Crenuchidae (13.6%). Knodus moenkhausii was the most abundant species with 1,476 individuals; together with Astyanax sp., Phalloceros harpagos, and Hasemania sp., these species represent over 95% of the total abundance. The species Astyanax sp. (occurring in 79.2% of the stretches) and K. moenkhausii (50.0%) were considered constant in both seasons. The longitudinal gradient (River Continuum) exerts a strong influence on the studied assemblage. According to CCA, the variables that structure the fish assemblage are related to water volume and habitat complexity. No seasonal variation in richness, diversity, abundance, or mass was detected. A cluster analysis suggests a separation of species composition between the stretches of higher and lower orders, which was not observed for seasonality. The streams were considered well preserved (mean PHI 82.9±7.5%), but anthropogenic influence was observed in some stretches, detected in the water quality and, mainly, in the integrity of the riparian vegetation. The exotic species Poecilia reticulata was sampled in the two stretches considered most affected by anthropogenic activities according to PHI, conductivity, and VC.

  15. Assessing executive function using a computer game: computational modeling of cognitive processes.

    Science.gov (United States)

    Hagler, Stuart; Jimison, Holly Brugge; Pavel, Misha

    2014-07-01

    Early and reliable detection of cognitive decline is one of the most important challenges of current healthcare. In this project, we developed an approach whereby a frequently played computer game can be used to assess a variety of cognitive processes and estimate the results of the pen-and-paper trail making test (TMT)--known to measure executive function, as well as visual pattern recognition, speed of processing, working memory, and set-switching ability. We developed a computational model of the TMT based on a decomposition of the test into several independent processes, each characterized by a set of parameters that can be estimated from play of a computer game designed to resemble the TMT. An empirical evaluation of the model suggests that it is possible to use the game data to estimate the parameters of the underlying cognitive processes and using the values of the parameters to estimate the TMT performance. Cognitive measures and trends in these measures can be used to identify individuals for further assessment, to provide a mechanism for improving the early detection of neurological problems, and to provide feedback and monitoring for cognitive interventions in the home.

  16. Solar augmentation for process heat with central receiver technology

    Science.gov (United States)

    Kotzé, Johannes P.; du Toit, Philip; Bode, Sebastian J.; Larmuth, James N.; Landman, Willem A.; Gauché, Paul

    2016-05-01

    Coal-fired boilers are currently one of the most widespread ways to deliver process heat to industry. John Thompson Boilers (JTB) offers industrial steam supply solutions for industry and utility-scale applications in Southern Africa. Transport costs add significantly to the coal price at locations far from the coal fields in Mpumalanga, Gauteng and Limpopo. The Helio100 project developed a low-cost, self-learning, wireless heliostat technology that requires no ground preparation. This is attractive as an augmentation alternative, as it can easily be installed on any open land a client may have available. This paper explores the techno-economic feasibility of solar augmentation for JTB coal-fired steam boilers by comparing the fuel savings of a generic 2 MW heliostat field at various locations throughout South Africa.

  17. Central pain processing in chronic tension-type headache

    DEFF Research Database (Denmark)

    Lindelof, Kim; Ellrich, Jens; Jensen, Rigmor

    2009-01-01

    ) reflects neuronal excitability due to nociceptive input in the brainstem. The aim of this study was to investigate nociceptive processing at the level of the brainstem in an experimental pain model of CTTH symptoms. METHODS: The effect of conditioning pain, 5 min infusion of hypertonic saline into the neck...... muscles, was investigated in 20 patients with CTTH and 20 healthy controls. In addition, a pilot study with isotonic saline was performed with 5 subjects in each group. The BR was elicited by electrical stimuli with an intensity of four times the pain threshold, with a superficial concentric electrode. We...... measured the BR, sensibility to pressure and electrical pain scores before, during and 25 min after the saline infusion. RESULTS: The pain rating of the electrical stimuli and the pain score of the hypertonic saline infusion were significantly higher in CTTH patients than in healthy volunteers. The primary...

  18. Undergraduate Game Degree Programs in the United Kingdom and United States: A Comparison of the Curriculum Planning Process

    Science.gov (United States)

    McGill, Monica M.

    2010-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…

  19. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2013-08-02

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This... software elements if those systems include software. This RG is one of six RG revisions addressing...

  20. Computer Simulation Methods for Crushing Process in an Jaw Crusher

    Science.gov (United States)

    Il'ich Beloglazov, Ilia; Andreevich Ikonnikov, Dmitrii

    2016-08-01

    One of the trends at modern mining enterprises is the application of combined systems for extraction and transportation of the rock mass. This technology uses conveyor lines as the continuous link of the combined system. The application of conveyor transport provides a significant reduction in energy costs, an increase in labor productivity, and process automation. However, conveyor transport imposes certain requirements on the quality of the transported material, the maximum piece size of the rock mass being one of the basic parameters. Crushing plants perform coarse crushing followed by crushing of the material to the maximum piece size that can be handled by conveyor transport; this stage is often performed by jaw crushers. Modelling of the crushing process in jaw crushers makes it possible to optimize the workflow and increase the efficiency of the equipment during further transportation and processing of rocks. In this paper we studied the interaction between the walls of the jaw crusher and the bulk material using the discrete element method (DEM). The article examines the modelling process in stages: design of the crusher construction in a solid and surface modelling system, and modelling of the crushing process based on experimental data obtained with the BOYD crushing unit. The destruction process and the resulting particle size distribution were studied. Analysis of the results shows good agreement between the actual experiment and the modelling process.
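
    To make the DEM idea concrete, here is a minimal sketch of a discrete-element time step with a linear spring-dashpot normal-contact model: spherical particles settle under gravity onto a flat wall. The particle count, material parameters, and geometry are illustrative placeholders, unrelated to the jaw-crusher model or the BOYD experiments.

```python
# Minimal DEM sketch: linear spring-dashpot normal contacts between spheres
# and a flat wall, integrated with symplectic Euler. Parameters are illustrative.
import numpy as np

n = 50
rng = np.random.default_rng(0)
radius = 0.01                         # m
mass = 2.0e-3                         # kg
k_n, c_n = 5.0e3, 0.5                 # normal stiffness (N/m), damping (N s/m)
g = np.array([0.0, -9.81])

pos = rng.uniform([0.0, 0.1], [0.2, 0.3], size=(n, 2))
vel = np.zeros((n, 2))
dt = 1.0e-5

def contact_forces(pos, vel):
    f = np.tile(mass * g, (n, 1))
    # particle-particle contacts (pairwise, O(n^2) for clarity)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * radius - dist
            if overlap > 0 and dist > 0:
                normal = d / dist
                rel_vn = np.dot(vel[j] - vel[i], normal)
                fn = (k_n * overlap - c_n * rel_vn) * normal
                f[i] -= fn
                f[j] += fn
    # flat wall at y = 0
    overlap_w = radius - pos[:, 1]
    hit = overlap_w > 0
    f[hit, 1] += k_n * overlap_w[hit] - c_n * vel[hit, 1]
    return f

for step in range(2000):              # explicit time stepping
    vel += contact_forces(pos, vel) / mass * dt
    pos += vel * dt

print("mean height after settling:", pos[:, 1].mean())
```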

  1. The ATLAS Fast Tracker Processing Units - input and output data preparation

    CERN Document Server

    Bolz, Arthur Eugen; The ATLAS collaboration

    2016-01-01

    The ATLAS Fast Tracker is a hardware processor built to reconstruct tracks at a rate of up to 100 kHz and provide them to the high-level trigger system. The Fast Tracker will allow the trigger to utilize tracking information from the entire detector at an earlier event selection stage than ever before, allowing for more efficient event rejection. The connections of the system to the detector read-outs and to the high-level trigger computing farms are made through custom boards implementing the Advanced Telecommunications Computing Architecture standard. The input is processed by the Input Mezzanine and Data Formatter boards, designed to receive and sort the data coming from the Pixel and Semiconductor Tracker detectors. The Fast Tracker to Level-2 Interface Card connects the system to the computing farm. The Input Mezzanines are 128 boards, performing clustering, placed on the 32 Data Formatter mother boards that sort the information into the 64 logical regions required by the downstream processing units. This necessitat...

  2. Contrasting serpentinization processes in the eastern Central Alps

    Science.gov (United States)

    Burkhard, D.J.M.; O'Neil, J.R.

    1988-01-01

    Stable isotope compositions have been determined for serpentinites from between Davos (Arosa-Platta nappe, Switzerland) and the Valmalenco (Italy). δD and δ18O values (-120 to -60‰ and 6-10‰, respectively) in the Arosa-Platta nappe indicate that serpentinization took place on the continent at relatively low temperatures in the presence of limited amounts of metamorphic fluids that contained a component of meteoric water. One sample of chrysotile has a δ18O value of 13‰ providing evidence of high W/R ratios and low formation temperature of lizardite-chrysotile in this area. In contrast, relatively high δD values (-42 to -34‰) and low δ18O values (4.4-7.4‰) for serpentine in the eastern part of the Valmalenco suggest a serpentinization process that took place at moderate temperatures in fluids that were dominated by ocean water. The antigorite in the Valmalenco is the first reported example of continental antigorite with an ocean water signature. An amphibole sample from a metasomatically overprinted contact zone to metasediments (δD = -36‰) indicates that the metasomatic event also took place in the presence of ocean water. Lower δD values (-93 to -60‰) of serpentines in the western part of the Valmalenco suggest a different alteration history possibly influenced by fluids associated with contact metamorphism. Low water/rock ratios during regional metamorphism (and metasomatism) have to be assumed for both regions. © 1988 Springer-Verlag.

  3. Fast direct reconstruction strategy of dynamic fluorescence molecular tomography using graphics processing units

    Science.gov (United States)

    Chen, Maomao; Zhang, Jiulou; Cai, Chuangjian; Gao, Yang; Luo, Jianwen

    2016-06-01

    Dynamic fluorescence molecular tomography (DFMT) is a valuable method to evaluate the metabolic process of contrast agents in different organs in vivo, and direct reconstruction methods can improve the temporal resolution of DFMT. However, challenges still remain due to the large time consumption of the direct reconstruction methods. An acceleration strategy using graphics processing units (GPU) is presented. The procedure of conjugate gradient optimization in the direct reconstruction method is programmed using the compute unified device architecture and then accelerated on GPU. Numerical simulations and in vivo experiments are performed to validate the feasibility of the strategy. The results demonstrate that, compared with the traditional method, the proposed strategy can reduce the time consumption by ˜90% without a degradation of quality.
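
    As a rough illustration of the accelerated step, the sketch below runs a conjugate gradient solve on the GPU with CuPy standing in for the paper's hand-written CUDA code. The random symmetric positive-definite system is a placeholder, not an actual fluorescence-tomography weight matrix.

```python
# Hedged sketch: conjugate gradient on the GPU via CuPy (a stand-in for the
# paper's CUDA implementation). The SPD system below is a synthetic placeholder.
import numpy as np
import cupy as cp

def conjugate_gradient(A, b, n_iter=200, tol=1e-8):
    """Solve A x = b for symmetric positive-definite A on the GPU."""
    x = cp.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = cp.dot(r, r)
    for _ in range(n_iter):
        Ap = A @ p                       # dense mat-vec runs on the GPU
        alpha = rs_old / cp.dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = cp.dot(r, r)
        if cp.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

n = 4096
M = cp.asarray(np.random.rand(n, n).astype(np.float32))
A = M @ M.T + n * cp.eye(n, dtype=cp.float32)   # make it SPD and well conditioned
b = cp.ones(n, dtype=cp.float32)
x = conjugate_gradient(A, b)
print("residual:", float(cp.linalg.norm(A @ x - b)))
```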

  4. Acceleration of Early-Photon Fluorescence Molecular Tomography with Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2013-01-01

    Full Text Available Fluorescence molecular tomography (FMT) with early photons can improve the spatial resolution and fidelity of the reconstructed results. However, its computing scale is always large, which limits its applications. In this paper, we introduce an acceleration strategy for early-photon FMT with graphics processing units (GPUs). In this strategy, the whole solution of FMT was divided into several modules and the time consumption of each module was studied. The two most time-consuming modules (the Gd and W modules) were each accelerated with the GPU, while the other modules remained coded in MATLAB. Several simulation studies with a heterogeneous digital mouse atlas were performed to confirm the performance of the acceleration strategy. The results confirmed the feasibility of the strategy and showed that the processing speed was improved significantly.

  5. Processing techniques for data from the Kuosheng Unit 1 shakedown safety-relief-valve tests

    Energy Technology Data Exchange (ETDEWEB)

    McCauley, E.W.; Rompel, S.L.; Weaver, H.J.; Altenbach, T.J.

    1982-08-01

    This report describes techniques developed at the Lawrence Livermore National Laboratory, Livermore, CA, for processing original data from the Taiwan Power Company's Kuosheng MKIII Unit 1 Safety Relief Valve Shakedown Tests conducted in April/May 1981. The computer codes used, TPSORT, TPPLOT, and TPPSD, form a special evaluation system for treating the data from its original packed binary form to ordered, calibrated ASCII transducer files and then to the production of time-history plots, numerical output files, and spectral analyses. The data processing techniques described provide a convenient means of independently examining and analyzing a unique data base for steam condensation phenomena in the MARKIII wetwell. The techniques developed for handling these data are applicable to the treatment of similar, but perhaps differently structured, experiment data sets.

  6. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge in the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of a remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA) and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental clean up and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  7. Global tree network for computing structures enabling global processing operations

    Science.gov (United States)

    Blumrich; Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
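
    The collective operations described above can be modelled in a few lines of software: broadcast pushes a value downstream from the root, and reduction combines values upstream from the leaves. The sketch below is a toy, purely software model with an illustrative 7-node binary tree; in the patented system these operations are performed by router hardware.

```python
# Toy software model of tree-network collectives: broadcast down from the root
# and reduction up from the leaves. Node layout and operator are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    rank: int
    value: int = 0
    children: List["Node"] = field(default_factory=list)

def broadcast(root: Node, payload: int) -> None:
    """Send payload downstream from the root to every node."""
    root.value = payload
    for child in root.children:
        broadcast(child, payload)

def reduce_up(node: Node, op: Callable[[int, int], int]) -> int:
    """Combine values upstream from the leaves toward the root."""
    acc = node.value
    for child in node.children:
        acc = op(acc, reduce_up(child, op))
    return acc

# Build a small binary tree of 7 nodes (root is rank 0).
nodes = [Node(rank=i, value=i + 1) for i in range(7)]
for i in range(3):                       # ranks 0..2 are interior nodes
    nodes[i].children = [nodes[2 * i + 1], nodes[2 * i + 2]]

broadcast(nodes[0], 42)                  # every node now holds 42
print([n.value for n in nodes])
print("tree reduction (sum):", reduce_up(nodes[0], lambda a, b: a + b))
```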

  8. Computer-Aided Modeling of Lipid Processing Technology

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel

    2011-01-01

    the analysis, in terms of their design variables and their impact in the process behavior, of three lipid-related processes has been performed: the solvent recovery section of the extraction of crude soybean oil, the deodorization of palm oil, and the deacidification of soybean oil.......Vegetable oils and fats have an important role in human nutrition and in the chemical industry since they are a source of energy, fat-soluble vitamins, and now also in the production of renewable sources of energy. Nowadays as the consumer preferences for natural products and healthier foods...... this is not the case for the edible oil and biodiesel industries. The oleochemical industry lags behind the chemical industry in terms of thermophysical property modeling and development of computational tools suitable for the design/analysis, and optimization of lipid-related processes. The aim of this work has been...

  9. A Comparison of Dental Chartings Performed at the Joint POW/MIA Accounting Command Central Identification Laboratory and the Kokura Central Identification Unit on Remains Identified from the Korean War.

    Science.gov (United States)

    Shiroma, Calvin Y

    2016-01-01

    During the Korean War, the Office of the Quartermaster General's Graves Registration Service (GRS) was responsible for the recovery, processing, identification, and repatriation of US remains. In January 1951, the GRS established a Central Identification Unit (CIU) at Kokura, Japan. At the Kokura CIU, postmortem dental examinations were performed by the dental technicians. Thirty-nine postmortem dental examinations performed at the CIU were compared to the findings documented in the Forensic Odontology Reports written at the JPAC Central Identification Laboratory (CIL). Differences were noted in 20 comparisons (51%). The majority of the discrepancies was considered negligible and would not alter the JPAC decision to disinter a set of unknown remains. Charting discrepancies that were considered significant included the occasional failure of the Kokura technicians to identify teeth with inter-proximal or esthetic restorations and the misidentification of a mechanically prepared tooth (i.e., tooth prepared for a restoration) as a carious surface.

  10. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enabled us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
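
    A hedged sketch of the classification stage: features are ranked with a Fisher discriminant score and a linear-kernel SVM is trained on the top-ranked subset. The synthetic data stand in for the 1000 scatter patterns and ten strains; the moment and texture feature extraction itself is omitted.

```python
# Hedged sketch: Fisher-score feature ranking followed by a linear-kernel SVM.
# Data are synthetic placeholders for the scatter-pattern features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples, n_features, n_classes = 1000, 200, 10
y = rng.integers(0, n_classes, n_samples)
X = rng.normal(size=(n_samples, n_features))
X[:, :20] += y[:, None] * 0.5            # make a handful of features informative

def fisher_score(X, y):
    """Per-feature Fisher score: between-class over within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall_mean) ** 2
                  for c in classes)
    within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in classes)
    return between / (within + 1e-12)

scores = fisher_score(X, y)
top = np.argsort(scores)[::-1][:30]       # keep the 30 best-ranked features

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```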

  11. Possibilities of computer application in modern geography teaching process

    Directory of Open Access Journals (Sweden)

    Ivkov-Džigurski Anđelija

    2009-01-01

    Full Text Available Geography is a science that follows modern trends in the development of contemporary science. One of the crucial things that gives the teaching process high quality is the application of modern techniques and methods. The modern organization of the teaching process in primary and secondary schools is unimaginable without innovations. This means changes and new elements in all segments of the teaching process. Good organization, innovation and new tendencies in the development of the science can raise the quality of the teaching process, thus enabling the student to study fully and rationally. Innovations should help students develop a dialectic way of thinking when explaining objects, phenomena and processes in nature and society, as well as enable them to notice cause-and-effect relationships. The application of new methods should ensure maximum activity of the students in terms of their research and independent work. Computers are used in many different ways; therefore, they can be used very rationally in different segments of the teaching process.

  12. The Use of GPUs for Solving the Computed Tomography Problem

    Directory of Open Access Journals (Sweden)

    A.E. Kovtanyuk

    2014-07-01

    Full Text Available Computed tomography (CT) is a widespread method used to study the internal structure of objects. The method has applications in medicine, industry and other fields of human activity. In particular, electronic imaging, as a type of CT, can be used to restore the structure of nanosized objects. Accurate and rapid results are in high demand in modern science. However, there are computational limitations that bound the possible usefulness of CT. On the other hand, the introduction of high-performance calculations using graphics processing units (GPUs) improves the quality and performance of computed tomography investigations. Moreover, parallel computing with GPUs gives significantly higher computation speeds when compared with central processing units (CPUs), because of the architectural advantages of the former. In this paper a computed tomography method of recovering the image using parallel computations powered by NVIDIA CUDA technology is considered. The implementation of this approach significantly reduces the time required for solving the CT problem.

  13. In-Core Computation of Geometric Centralities with HyperBall: A Hundred Billion Nodes and Beyond

    CERN Document Server

    Boldi, Paolo

    2013-01-01

    Given a social network, which of its nodes are more central? This question has been asked many times in sociology, psychology and computer science, and a whole plethora of centrality measures (a.k.a. centrality indices, or rankings) were proposed to account for the importance of the nodes of a network. In this paper, we approach the problem of computing geometric centralities, such as closeness and harmonic centrality, on very large graphs; traditionally this task requires an all-pairs shortest-path computation in the exact case, or a number of breadth-first traversals for approximated computations, but these techniques yield very weak statistical guarantees on highly disconnected graphs. We rather assume that the graph is accessed in a semi-streaming fashion, that is, that adjacency lists are scanned almost sequentially, and that a very small amount of memory (in the order of a dozen bytes) per node is available in core memory. We leverage the newly discovered algorithms based on HyperLogLog counters, making...
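
    The iterative ball-growing idea can be sketched compactly if exact Python sets stand in for the HyperLogLog counters (the real algorithm keeps only a few bytes per node and scans adjacency lists in a semi-streaming fashion). The toy directed graph below is illustrative only.

```python
# Hedged sketch of the HyperBall-style iteration for harmonic centrality,
# with exact sets in place of HyperLogLog counters to keep the sketch short.
from collections import defaultdict

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (2, 0)]   # toy directed graph
nodes = sorted({u for e in edges for u in e})
preds = defaultdict(list)
for u, v in edges:
    preds[v].append(u)      # harmonic centrality of v counts nodes that reach v

# ball[v] approximates the set of nodes within distance t of v along incoming paths
ball = {v: {v} for v in nodes}
harmonic = {v: 0.0 for v in nodes}

t = 0
changed = True
while changed:
    t += 1
    changed = False
    new_ball = {}
    for v in nodes:
        b = set(ball[v])
        for u in preds[v]:
            b |= ball[u]             # union of counters, as in HyperBall
        grew = len(b) - len(ball[v])
        if grew:
            harmonic[v] += grew / t  # nodes first reached at distance t add 1/t
            changed = True
        new_ball[v] = b
    ball = new_ball

print(harmonic)
```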

  14. 77 FR 58576 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...

    Science.gov (United States)

    2012-09-21

    ... COMMISSION Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers, and... importation of certain wireless communication devices, portable music and data processing devices, computers... after importation of certain wireless communication devices, portable music and data processing...

  15. Clinical and epidemiological study of stress hyperglycemia among medical intensive care unit patients in Central India

    Science.gov (United States)

    Sharma, Jitendra; Chittawar, Sachin; Maniram, Ram Singh; Dubey, T. N.; Singh, Ambrish

    2017-01-01

    Background: Stress hyperglycemia is common in patients presenting at the emergency medical ward and is associated with poor prognosis and increased risk of mortality. Aims and Objective: To study and determine the prevalence and factors associated with stress hyperglycemia. Materials and Methods: A cross-sectional observational study was performed on 536 nondiabetic patients presented to the Intensive Care Unit (ICU) at Gandhi Medical College and allied Hamidia Hospital, Bhopal, between March 31, 2015, and May 28, 2015. A detailed history including demographic profile, presence of chronic disease, history of hospitalization and ICU admission, surgical status, and major reason for ICU admission (i.e., predominant diagnostic category) was collected. Hematological and other parameters based on profile of study population were also analyzed. Results: Out of 536 patients, 109 (20.33%) had stress hyperglycemia. Out of 109 patients with stress hyperglycemia, 87 (16.23%) patients had glycated hemoglobin (HbA1c) <5.7% and 22 (4.10%) patients had HbA1c between 5.7% and 6.4%. Mean age of the study population was 40.27 ± 1.44 years, with male dominance. Mean random blood glucose level was 181.46 ± 3.80 mg/dl. Frequency of stress hyperglycemia was 24.13% in stroke, 19.54% in multiple organ dysfunction syndrome (MODS), 17.24% in chronic kidney disease (CKD), 12.64% in central nervous system (CNS) infection, 8.05% in chronic liver disease (CLD), and 8.05% in seizure patients. Association between stroke and stress hyperglycemia was significant (P = 0.036). Association between hospital stay more than 7 days and stress hyperglycemia was significant in stroke patients (P = 0.0029), CKD patients (P = 0.0036), CLD (P = 0.0099), and MODS patients (P = 0.0328). Conclusions: The factors associated with stress hyperglycemia were stroke, MODS, CKD, CNS infection, CLD, seizure patients, with prolonged hospital stay and expected proportion. PMID:28217513

  16. Corrective Action Decision Document for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    U.S. Department of Energy Nevada Operations Office

    1999-04-02

    This Corrective Action Decision Document (CADD) identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 417: Central Nevada Test Area Surface, Nevada, under the Federal Facility Agreement and Consent Order. Located in Hot Creek Valley in Nye County, Nevada, and consisting of three separate land withdrawal areas (UC-1, UC-3, and UC-4), CAU 417 is comprised of 34 corrective action sites (CASs) including 2 underground storage tanks, 5 septic systems, 8 shaker pad/cuttings disposal areas, 1 decontamination facility pit, 1 burn area, 1 scrap/trash dump, 1 outlier area, 8 housekeeping sites, and 16 mud pits. Four field events were conducted between September 1996 and June 1998 to complete a corrective action investigation indicating that the only contaminant of concern was total petroleum hydrocarbon (TPH) which was found in 18 of the CASs. A total of 1,028 samples were analyzed. During this investigation, a statistical approach was used to determine which depth intervals or layers inside individual mud pits and shaker pad areas were above the State action levels for the TPH. Other related field sampling activities (i.e., expedited site characterization methods, surface geophysical surveys, direct-push geophysical surveys, direct-push soil sampling, and rotosonic drilling located septic leachfields) were conducted in this four-phase investigation; however, no further contaminants of concern (COCs) were identified. During and after the investigation activities, several of the sites which had surface debris but no COCs were cleaned up as housekeeping sites, two septic tanks were closed in place, and two underground storage tanks were removed. The focus of this CADD was to identify CAAs which would promote the prevention or mitigation of human exposure to surface and subsurface soils with contaminant

  17. A computational model for the numerical simulation of FSW processes

    OpenAIRE

    Agelet de Saracibar Bosch, Carlos; Chiumenti, Michèle; Santiago, Diego de; Cervera Ruiz, Miguel; Dialami, Narges; Lombera, Guillermo

    2010-01-01

    In this paper a computational model for the numerical simulation of Friction Stir Welding (FSW) processes is presented. FSW is a new method of welding in solid state in which a shouldered tool with a profile probe is rotated and slowly plunged into the joint line between two pieces of sheet or plate material which are butted together. Once the probe has been completely inserted, it is moved with a small tilt angle in the welding direction. Here a quasi-static, thermal transient, mixed mult...

  18. Point to point processing of digital images using parallel computing

    Directory of Open Access Journals (Sweden)

    Eric Olmedo

    2012-05-01

    Full Text Available This paper presents an approach to the point-to-point processing of digital images using parallel computing, particularly for grayscale conversion, brightening, darkening, thresholding and contrast change. The point-to-point technique applies a transformation to each pixel of the image concurrently rather than sequentially. The approach uses CUDA as the parallel programming tool on a GPU in order to take advantage of all available cores. Preliminary results show that CUDA obtains better results for most of the filters used; the exception is the negative filter at lower image resolutions, where OpenCV performed better, although at high resolutions CUDA performance is again superior.
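
    A hedged sketch of the point operations named above, written with vectorized NumPy so that every pixel is transformed independently, which is structurally the same as a per-pixel CUDA kernel with one thread per pixel. The image is a random placeholder.

```python
# Point-to-point operations: grayscale, brighten/darken, threshold, contrast.
# Each pixel is transformed independently; the image is a synthetic stand-in.
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)

def to_gray(img):                      # luminance-weighted grayscale
    w = np.array([0.299, 0.587, 0.114])
    return (img @ w).astype(np.uint8)

def brighten(img, delta):              # delta > 0 brightens, < 0 darkens
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def threshold(img, t):                 # binary threshold
    return np.where(img >= t, 255, 0).astype(np.uint8)

def contrast(img, factor):             # scale intensities around mid-gray
    return np.clip((img.astype(np.float32) - 128) * factor + 128, 0, 255).astype(np.uint8)

gray = to_gray(rgb)
print(brighten(gray, 40).mean(), threshold(gray, 128).mean(), contrast(gray, 1.5).std())
```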

  19. Computational information geometry for image and signal processing

    CERN Document Server

    Critchley, Frank; Dodson, Christopher

    2017-01-01

    This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.

  20. INDICATIONS AND COMPLICATIONS OF CENTRAL VENOUS CATHETERIZATION IN CRITICALLY ILL CHILDREN IN INTENSIVE CARE UNIT

    Directory of Open Access Journals (Sweden)

    Shwetal Bhatt

    2012-02-01

    Full Text Available Background: Nothing can be more difficult, time consuming and frustrating than obtaining vascular access in a critically ill pediatric patient. Central venous catheters are widely used in the care of critically ill patients. Methodology: This paper reviews our experience with central lines in 28 critically ill patients, including neonates and non-neonates, over a study period from October 2008 to October 2009. Of the 28 patients, central venous catheterization was more frequent in those older than one month and of female sex. Results: The route of insertion was femoral in approximately 89% of our patients, and insertion was successful in 24 patients. The most common indications for catheter use were venous access in shock (37.1%) in neonates and monitoring of central venous pressure (32%) in non-neonate patients with ARDS with pulmonary edema and shock. The central line was removed in the majority of patients (60%) within 24-48 hours of insertion and was kept for a maximum of six days in just one patient. The organism most frequently isolated was Acinetobacter. Recommendations made include the use of strict aseptic measures by a restricted number of skilled operators while inserting and maintaining the central line, routine confirmatory X-ray or fluoroscopy to check the position of the central line before catheter use and, if possible, use for central pressure monitoring. Conclusion: We conclude that central venous catheterization is a safe and effective measure, so we recommend timely and judicious use of percutaneous central venous catheters in critically ill pediatric patients in the PICU and NICU. [National J of Med Res 2012; 2(1): 85-88]

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  2. Process of Market Strategy Optimization Using Distributed Computing Systems

    Directory of Open Access Journals (Sweden)

    Nowicki Wojciech

    2015-12-01

    Full Text Available If market repeatability is assumed, it is possible, with some real probability, to deduce short-term market changes by making some calculations. An algorithm based on a logical and statistically reasonable scheme for making decisions about opening or closing a position on a market is called an automated strategy. Due to market volatility, all parameters change from time to time, so there is a need to optimize them constantly. This article describes the organization of a team researching market strategies. Individual team members are merged into small groups according to their responsibilities. The team members perform data processing tasks through a cascade organization, providing solutions that speed up work related to the use of remote computing resources. They also work out how to store results in a suitable way, according to the type of task, and how to facilitate the publication of a large number of results.

  3. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  4. Personal Computer (PC) based image processing applied to fluid mechanics

    Science.gov (United States)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
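
    The final interpolation step lends itself to a short sketch: velocities sampled at scattered streak locations are resampled onto a uniform grid with a Gaussian-weighted average. The adaptive window of the paper is simplified here to a fixed width, and the scattered samples are synthetic.

```python
# Hedged sketch: Gaussian-weighted interpolation of scattered velocity samples
# onto a uniform grid (fixed window width instead of the paper's adaptive one).
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, size=(400, 2))               # streak midpoints (x, y)
vel = np.stack([np.sin(2 * np.pi * pts[:, 1]),        # synthetic velocity samples
                np.cos(2 * np.pi * pts[:, 0])], axis=1)

nx = ny = 32
gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
sigma = 0.05                                          # Gaussian window width

grid_vel = np.zeros((ny, nx, 2))
for j in range(ny):
    for i in range(nx):
        d2 = (pts[:, 0] - gx[j, i]) ** 2 + (pts[:, 1] - gy[j, i]) ** 2
        w = np.exp(-d2 / (2 * sigma ** 2))
        grid_vel[j, i] = (w[:, None] * vel).sum(axis=0) / (w.sum() + 1e-12)

print("interpolated grid velocity field shape:", grid_vel.shape)
```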

  5. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Meier, David E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edwards, Matthew K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Orton, Robert D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rapko, Brian M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smart, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  6. Manyscale Computing for Sensor Processing in Support of Space Situational Awareness

    Science.gov (United States)

    Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.

    2014-09-01

    Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include

  7. Central limit theorems for smoothed extreme value estimates of Poisson point processes boundaries

    CERN Document Server

    Girard, Stéphane

    2011-01-01

    In this paper, we give sufficient conditions to establish central limit theorems for boundary estimates of Poisson point processes. The considered estimates are obtained by smoothing some bias-corrected extreme values of the point process. We show how the smoothing leads to Gaussian asymptotic distributions and therefore to pointwise confidence intervals. Some new unidimensional and multidimensional examples are provided.

  8. Codigestion of manure and industrial organic waste at centralized biogas plants: process imbalances and limitations

    DEFF Research Database (Denmark)

    Bangsø Nielsen, Henrik; Angelidaki, Irini

    2008-01-01

    The present study focuses on process imbalances in Danish centralized biogas plants treating manure in combination with industrial waste. Collection of process data from various full-scale plants along with a number of interviews showed that imbalances occur frequently. High concentrations...

  9. In-Situ Statistical Analysis of Autotune Simulation Data using Graphical Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Ranjan, Niloo [ORNL; Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL

    2013-08-01

    Developing accurate building energy simulation models to assist energy efficiency at speed and scale is one of the research goals of the Whole-Building and Community Integration group, which is a part of Building Technologies Research and Integration Center (BTRIC) at Oak Ridge National Laboratory (ORNL). The aim of the Autotune project is to speed up the automated calibration of building energy models to match measured utility or sensor data. The workflow of this project takes input parameters and runs EnergyPlus simulations on Oak Ridge Leadership Computing Facility's (OLCF) computing resources such as Titan, the world's second-fastest supercomputer. Multiple simulations run in parallel on nodes having 16 processors each and a Graphics Processing Unit (GPU). Each node produces a 5.7 GB output file comprising 256 files from 64 simulations. Four types of output data covering monthly, daily, hourly, and 15-minute time steps for each annual simulation are produced. A total of 270TB+ of data has been produced. In this project, the simulation data is statistically analyzed in-situ using GPUs while annual simulations are being computed on the traditional processors. Titan, with its recent addition of 18,688 Compute Unified Device Architecture (CUDA) capable NVIDIA GPUs, has greatly extended its capability for massively parallel data processing. CUDA is used along with C/MPI to calculate statistical metrics such as sum, mean, variance, and standard deviation leveraging GPU acceleration. The workflow developed in this project produces statistical summaries of the data, which reduces by multiple orders of magnitude the time and amount of data that needs to be stored. These statistical capabilities are anticipated to be useful for sensitivity analysis of EnergyPlus simulations.
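
    A hedged sketch of the in-situ statistics: each output chunk is reduced on the device to a few partial sums (CuPy standing in for the CUDA/C/MPI code), and the partials are merged into the overall mean, variance, and standard deviation without storing the full time series. The chunk sizes and values are synthetic placeholders.

```python
# Hedged sketch: per-chunk partial sums computed on the GPU (via CuPy) and
# merged into overall statistics; the data stream is a synthetic placeholder.
import numpy as np
import cupy as cp

def partial_stats(chunk):
    """Reduce one chunk on the GPU to (count, sum, sum of squares)."""
    c = cp.asarray(chunk, dtype=cp.float64)
    return c.size, float(c.sum()), float((c * c).sum())

def merge(partials):
    n = sum(p[0] for p in partials)
    s = sum(p[1] for p in partials)
    ss = sum(p[2] for p in partials)
    mean = s / n
    var = ss / n - mean ** 2            # population variance
    return mean, var, var ** 0.5

# Simulate streaming chunks of 15-minute simulation output values.
rng = np.random.default_rng(7)
partials = [partial_stats(rng.normal(20.0, 3.0, size=100_000)) for _ in range(64)]
mean, var, std = merge(partials)
print(f"mean={mean:.3f} var={var:.3f} std={std:.3f}")
```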

  10. 2012 Groundwater Monitoring Report Central Nevada Test Area, Subsurface Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-04-01

    The Central Nevada Test Area was the site of a 0.2- to 1-megaton underground nuclear test in 1968. The surface of the site has been closed, but the subsurface is still in the corrective action process. The corrective action alternative selected for the site was monitoring with institutional controls. Annual sampling and hydraulic head monitoring are conducted as part of the subsurface corrective action strategy. The site is currently in the fourth year of the 5-year proof-of-concept period that is intended to validate the compliance boundary. Analytical results from the 2012 monitoring are consistent with those of previous years. Tritium remains at levels below the laboratory minimum detectable concentration in all wells in the monitoring network. Samples collected from reentry well UC-1-P-2SR, which is not in the monitoring network but was sampled as part of supplemental activities conducted during the 2012 monitoring, indicate concentrations of tritium that are consistent with previous sampling results. This well was drilled into the chimney shortly after the detonation, and water levels continue to rise, demonstrating the very low permeability of the volcanic rocks. Water level data from new wells MV-4 and MV-5 and recompleted well HTH-1RC indicate that hydraulic heads are still recovering from installation and testing. Data from wells MV-4 and MV-5 also indicate that head levels have not yet recovered from the 2011 sampling event during which several thousand gallons of water were purged. It has been recommended that a low-flow sampling method be adopted for these wells to allow head levels to recover to steady-state conditions. Despite the lack of steady-state groundwater conditions, hydraulic head data collected from alluvial wells installed in 2009 continue to support the conceptual model that the southeast-bounding graben fault acts as a barrier to groundwater flow at the site.

  11. Aeromagnetic study of the midcontinent gravity high of central United States

    Science.gov (United States)

    King, Elizabeth R.; Zietz, Isidore

    1971-01-01

    present Earth's field and differs from it radically in direction. This magnetization was acquired before the flows were tilted into their present positions. A computed magnetic profile shows that a trough of flows with such a magnetization and inward-dipping limbs can account for the observed persistent lows along the western edge of the block, the relatively low magnetic values along the axis of the block, and the large positive anomaly along the eastern side of the block. Flows as much as 1 mi thick near the base of the sequence have a remanent magnetization with a nearly opposite polarity. This reverse polarity has been measured on both sides of Lake Superior and is probably also present farther south, particularly in Iowa where the outer units of the block in an area north of Des Moines give rise to a prominent magnetic low. The axis of this long belt of Keweenawan mafic rocks cuts discordantly through the prevailing east-west-trending fabric of the older Precambrian terrane from southern Kansas to Lake Superior. This belt has several major left-lateral offsets, one of which produces a complete hiatus in the vicinity of the 40th parallel where an east-west transcontinental rift or fracture zone has been proposed. The axial basins of clastic rocks are outlined by linear magnetic anomalies and show a concordant relation to the structure of the mafic flows. These basins are oriented at an angle to the main axis, suggesting that the entire feature originated as a major rift composed of a series of short, linear, en echelon segments with offsets similar to the transform faults characterizing the present mid-ocean rift system. This midcontinent rift may well have been part of a Keweenawan global rift system with initial offsets consisting of transform faults along pre-existing fractures, but apparently it never fully developed laterally into an ocean basin, and the upwelling mafic material was localized along a relatively narrow belt.

  13. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.

  14. Parallel design of JPEG-LS encoder on graphics processing units

    Science.gov (United States)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with a 26.3x speedup over the original CPU code.
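
    One of the CUDA techniques named above, the parallel prefix sum, can be sketched in NumPy so the work-efficient up-sweep/down-sweep (Blelloch) structure is visible; on a GPU each slice update would be carried out by many threads in parallel. This is a generic scan, not the authors' encoder code.

```python
# Work-efficient exclusive prefix sum (Blelloch scan), shown with NumPy slices;
# each slice assignment corresponds to one parallel step on the GPU.
import numpy as np

def blelloch_scan(a):
    """Exclusive prefix sum of a 1-D array whose length is a power of two."""
    x = a.astype(np.int64).copy()
    n = x.size
    # up-sweep (reduce) phase
    step = 1
    while step < n:
        x[2 * step - 1::2 * step] += x[step - 1::2 * step]
        step *= 2
    # down-sweep phase
    x[-1] = 0
    step = n // 2
    while step >= 1:
        left = x[step - 1::2 * step].copy()
        x[step - 1::2 * step] = x[2 * step - 1::2 * step]
        x[2 * step - 1::2 * step] += left
        step //= 2
    return x

data = np.arange(1, 17)                               # 1..16
print(blelloch_scan(data))                            # exclusive scan
print(np.concatenate([[0], np.cumsum(data)[:-1]]))    # reference result
```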

  15. Evapotranspiration Units for the Diamond Valley Flow System Groundwater Discharge Area, Central Nevada, 2010

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These data were created as part of a hydrologic study to characterize groundwater budgets and water quality in the Diamond Valley Flow System (DVFS), central Nevada....

  16. Product or waste? Importation and end-of-life processing of computers in Peru.

    Science.gov (United States)

    Kahhat, Ramzy; Williams, Eric

    2009-08-01

    This paper considers the importation of used personal computers (PCs) in Peru and domestic practices in their production, reuse, and end-of-life processing. The empirical pillars of this study are analysis of government data describing trade in used and new computers and surveys and interviews of computer sellers, refurbishers, and recyclers. The United States is the primary source of used PCs imported to Peru. Analysis of shipment value (as measured by trade statistics) shows that 87-88% of imported used computers had a price higher than the ideal recycle value of constituent materials. The official trade in end-of-life computers is thus driven by reuse as opposed to recycling. The domestic reverse supply chain of PCs is well developed with extensive collection, reuse, and recycling. Environmental problems identified include open burning of copper-bearing wires to remove insulation and landfilling of CRT glass. Distinct from informal recycling in China and India, printed circuit boards are usually not recycled domestically but exported to Europe for advanced recycling or to China for (presumably) informal recycling. It is notable that purely economic considerations lead to circuit boards being exported to Europe where environmental standards are stringent, presumably due to higher recovery of precious metals.

  17. CENTRAL ASIA IN THE FOREIGN POLICY OF RUSSIA, THE UNITED STATES, AND THE EUROPEAN UNION

    OpenAIRE

    Omarov, Mels; Omarov, Noor

    2009-01-01

    The Soviet Union left behind a geopolitical vacuum in Central Asia which augmented the interest of outside powers in the region. Indeed, its advantageous geopolitical location, natural riches (oil and gas in particular), as well as transportation potential and the possibility of using it as a bridgehead in the counter-terrorist struggle have transformed Central Asia into one of the most attractive geopolitical areas. The great powers' highly divergent interests have led to their sharp rivalry...

  18. Performance analysis and acceleration of cross-correlation computation using FPGA implementation for digital signal processing

    Science.gov (United States)

    Selma, R.

    2016-09-01

    This paper describes a comparison of the cross-correlation computation speed of the most commonly used computation platforms (CPU, GPU) with an FPGA-based design. It also describes the structure of the cross-correlation unit implemented for testing purposes. A speedup of computations was achieved with the FPGA-based design, ranging from 16 to 5400 times compared to CPU computations and from 3 to 175 times compared to GPU computations.
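
    The operation being benchmarked is ordinary discrete cross-correlation; the sketch below computes it both directly and via the FFT so the two results can be checked against each other. Signal lengths and contents are arbitrary placeholders, and no FPGA- or GPU-specific code is shown.

```python
# Discrete cross-correlation computed two ways: directly in the time domain
# and via the FFT, with the circular result reordered to match np.correlate.
import numpy as np
from numpy.fft import rfft, irfft

rng = np.random.default_rng(5)
x = rng.standard_normal(4096)
y = rng.standard_normal(4096)

# direct (time-domain) full cross-correlation
direct = np.correlate(x, y, mode="full")

# FFT-based cross-correlation: corr = IFFT(FFT(x) * conj(FFT(y)))
n = len(x) + len(y) - 1
nfft = 1 << (n - 1).bit_length()                 # next power of two >= n
circ = irfft(rfft(x, nfft) * np.conj(rfft(y, nfft)), nfft)
# reorder the circular result so negative lags come first, matching np.correlate
fft_corr = np.concatenate([circ[nfft - (len(y) - 1):], circ[:len(x)]])

print("max abs difference:", np.max(np.abs(direct - fft_corr)))
```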

  19. Computational modelling of a thermoforming process for thermoplastic starch

    Science.gov (United States)

    Szegda, D.; Song, J.; Warby, M. K.; Whiteman, J. R.

    2007-05-01

    Plastic packaging waste currently forms a significant part of municipal solid waste and as such is causing increasing environmental concerns. Such packaging is largely non-biodegradable and is particularly difficult to recycle or to reuse due to its complex composition. Apart from limited recycling of some easily identifiable packaging wastes, such as bottles, most packaging waste ends up in landfill sites. In recent years, in an attempt to address this problem in the case of plastic packaging, the development of packaging materials from renewable plant resources has received increasing attention and a wide range of bioplastic materials based on starch are now available. Environmentally these bioplastic materials also reduce reliance on oil resources and have the advantage that they are biodegradable and can be composted upon disposal to reduce the environmental impact. Many food packaging containers are produced by thermoforming processes in which thin sheets are inflated under pressure into moulds to produce the required thin wall structures. Hitherto these thin sheets have almost exclusively been made of oil-based polymers and it is for these that computational models of thermoforming processes have been developed. Recently, in the context of bioplastics, commercial thermoplastic starch sheet materials have been developed. The behaviour of such materials is influenced both by temperature and, because of the inherent hydrophilic characteristics of the materials, by moisture content. Both of these aspects affect the behaviour of bioplastic sheets during the thermoforming process. This paper describes experimental work and work on the computational modelling of thermoforming processes for thermoplastic starch sheets in an attempt to address the combined effects of temperature and moisture content. After a discussion of the background of packaging and biomaterials, a mathematical model for the deformation of a membrane into a mould is presented, together with its

  20. Sector spanning agrifood process transparency with Direct Computer Mapping

    Directory of Open Access Journals (Sweden)

    Mónika Varga

    2010-11-01

    Full Text Available Agrifood processes are built from multiscale, time-varying networks that span many sectors, from cultivation through animal breeding, the food industry, and trade to the consumer. Sector-spanning traceability has not yet been solved, because neither the “one-step backward, one-step forward” passing of IDs nor large, sophisticated databases provide a feasible solution. In our approach, the transparency of process networks is based on a generic description of dynamic mass balances. Solving this apparently more difficult task makes possible the unified acquisition of data from the different ERP systems and the scalable storage of these simplified process models. In addition, various task-specific intensive parameters (e.g., concentrations, prices, etc.) can also be carried with the mass flows. With the knowledge of these structured models, the planned Agrifood Interoperability Centers can serve tracing and tracking results to the actors and to the public authorities. Our methodology is based on the Direct Computer Mapping of process models. The software is implemented in the open-source GNU Prolog and C++ languages. In the first, preliminary phase we have studied several deliberately different realistic actors, as well as an example of a sector-spanning chain combined from these realistic elements.

  1. Business process reengineering in the centralization of the industrial enterprises management

    Directory of Open Access Journals (Sweden)

    N.I. Chukhray

    2015-09-01

    Full Text Available The aim of the article. An important strategic direction for an independent state with a stable economy is the development of the national economy, which requires new, upgraded tools that make it possible to redesign and improve the production and management activities of an enterprise and to make it more productive and competitive while saving financial, labor and other resources. This, however, presupposes partial and, in some cases, complete restructuring of business processes. The aim of the article is to study the features of business process reengineering at domestic enterprises and to develop an algorithm for centralizing business processes, using JSC «Concern-Electron» as an example. To achieve this goal, the research pursues the following objectives: to summarize the main approaches to business process reengineering based on centralization; to characterize the main stages of implementing reengineering at an industrial enterprise; to propose a mechanism for selecting subsidiary business processes for reengineering; and to determine an algorithm for centralizing business process management. The results of the analysis. The paper summarizes the main approaches to business process reengineering based on centralization; characterizes the advantages of its use at industrial enterprises; and proposes reengineering stages for Ukrainian industrial enterprises. Business process reengineering improves the efficiency of work organization at JSC «Concern-Electron»: a generalized approach to the centralization of industrial enterprise management is proposed, together with an algorithm for centralizing business process management that includes identification of duplicated business processes, selection of business processes for reengineering using fuzzy set theory, and managerial decision-making on reengineering. Conclusions and

  2. Design of a Distributed Control System Using a Personal Computer and Micro Control Units for Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Mitsuhiro Yamano

    2010-01-01

    Full Text Available Problem statement: Humanoid robots have many motors and sensors, and many control methods are used to carry out the robots' complicated tasks. Therefore, efficient control systems are required. Approach: This study presented a distributed control system using a Personal Computer (PC) and Micro Control Units (MCUs) for humanoid robots. Distributed control systems have the advantages that parallel processing on multiple computers is possible and that cables in the system can be kept short. The functions required of a control system for humanoid robots were discussed. Based on this discussion, the hardware of the system, comprising a PC and MCUs, was proposed. The system was designed to carry out robot control efficiently and can be expanded easily by increasing the number of MCU boards. The software of the system for feedback control of the motors and for communication between the computers was proposed; flexible switching of motor control methods can be achieved easily. Results: Experiments were performed to show the effectiveness of the system. The sampling frequency of the whole system can be about 0.5 kHz, and that of the local MCUs can be about 10 kHz. The motor control method could be changed during motion in an experiment controlling four joints of the robot. Conclusion: The results of the experiments showed that the distributed control system proposed in this study is effective for humanoid robots.
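
    The division of labour described above — whole-body commands streamed from the PC at roughly 0.5 kHz, with each MCU closing its own motor loop at roughly 10 kHz — can be illustrated with a toy sketch. The class, gains, and single-joint plant model below are illustrative assumptions, not the authors' implementation.

```python
class JointPID:
    """Minimal PID position loop, as an MCU might run locally (illustrative only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_meas = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        # derivative on the measurement avoids a torque spike when the setpoint jumps
        d_meas = 0.0 if self.prev_meas is None else (measured - self.prev_meas) / self.dt
        self.prev_meas = measured
        return self.kp * error + self.ki * self.integral - self.kd * d_meas

pid = JointPID(kp=400.0, ki=10.0, kd=40.0, dt=1e-4)   # 10 kHz inner loop on the "MCU"
angle, velocity = 0.0, 0.0                            # toy unit-inertia joint
setpoint = 0.0
for pc_tick in range(500):                            # 1 s of 0.5 kHz updates from the "PC"
    setpoint = 0.5                                    # whole-body command (a step here)
    for _ in range(20):                               # 20 inner MCU steps per PC update
        torque = pid.update(setpoint, angle)
        velocity += torque * 1e-4
        angle += velocity * 1e-4
print(f"joint angle after 1 s: {angle:.2f} rad (command {setpoint} rad)")
```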

  3. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    Science.gov (United States)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of the problems of quality control in mammography units and film processors as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and is the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC are smaller than those of the film processors, because almost all processors fall outside the acceptable variation limits, which can affect mammographic image quality and the dose to the breast. Only four mammography units met the minimum phantom-image score established by the ACR and FDA.

  4. The Environmental Impacts of a Desktop Computer: Influence of Choice of Functional Unit, System Boundary and User Behaviour

    Science.gov (United States)

    Simanovska, J.; Šteina, Māra; Valters, K.; Bažbauers, G.

    2009-01-01

    The prevention of pollution during the design phase of products and processes is gaining importance in environmental policy over the more historically established principle of end-of-pipe pollution reduction. This approach requires predicting the potential environmental impacts to be avoided or reduced and prioritising the most efficient areas for action. Currently the most appropriate method for this purpose is life cycle assessment (LCA) - a method for accounting and attributing all environmental impacts which arise during the lifetime of a product, starting with the production of raw materials and ending with the disposal or recycling of the discarded product at the end of its life. The LCA, however, can be misleading if those performing the study disregard information gaps and the limitations of the chosen methodology. In this study we assessed the environmental impact of desktop computers using a simplified LCA method (Eco-indicator 99) and by developing various scenarios (changing service life, user behaviour, energy supply, etc.). The study demonstrates that actions for improvement lie in very different areas. The study also concludes that the approach to defining the functional unit must be sufficiently flexible in order to avoid discounting areas of potential action. Therefore, with regard to computers we agree with other authors in using the functional unit "one computer", but suggest not binding it to a fixed service life or usage time and instead developing several scenarios that vary these parameters. The study also demonstrates the importance of a systemic approach when assessing complex product systems - the more complex the system, the broader the scope for potential action. We conclude that, for computers, which are energy-using and material-intensive products, the measures to reduce environmental impacts lie not only with the producer and user of the particular product, but also with the whole national energy supply and waste management
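
    The scenario approach advocated above — keeping "one computer" as the functional unit while varying service life and user behaviour — reduces, for the use phase, to simple parameterized arithmetic. The sketch below only illustrates that idea; every number in it is a hypothetical placeholder, not a value from the study.

```python
def use_phase_impact(active_w, idle_w, hours_active_per_day, hours_idle_per_day,
                     service_life_years, grid_kg_co2_per_kwh):
    """Use-phase electricity and CO2 for one desktop computer (the functional unit)."""
    daily_kwh = (active_w * hours_active_per_day + idle_w * hours_idle_per_day) / 1000.0
    total_kwh = daily_kwh * 365 * service_life_years
    return total_kwh, total_kwh * grid_kg_co2_per_kwh

# All figures below are hypothetical, for illustration only.
scenarios = {
    "short life, left idling": dict(active_w=120, idle_w=60, hours_active_per_day=4,
                                    hours_idle_per_day=12, service_life_years=3,
                                    grid_kg_co2_per_kwh=0.4),
    "long life, switched off": dict(active_w=120, idle_w=60, hours_active_per_day=4,
                                    hours_idle_per_day=0, service_life_years=6,
                                    grid_kg_co2_per_kwh=0.4),
}
for name, params in scenarios.items():
    kwh, co2 = use_phase_impact(**params)
    print(f"{name}: {kwh:.0f} kWh, {co2:.0f} kg CO2 over the service life")
```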

  5. AN APPROACH TO EFFICIENT FEM SIMULATIONS ON GRAPHICS PROCESSING UNITS USING CUDA

    Directory of Open Access Journals (Sweden)

    Björn Nutti

    2014-04-01

    Full Text Available The paper presents a highly efficient way of simulating the dynamic behavior of deformable objects by means of the finite element method (FEM), with computations performed on Graphics Processing Units (GPUs). The presented implementation reduces memory-access bottlenecks by grouping the necessary data per node pair, in contrast to the classical per-element organization. This strategy avoids memory access patterns that are ill-suited to the GPU memory architecture. Furthermore, the presented implementation takes advantage of the underlying sparse-block-matrix structure, and it is demonstrated how to avoid potential bottlenecks in the algorithm. To achieve plausible deformation behavior under large local rotations, the objects are modeled by means of a simplified co-rotational FEM formulation.
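
    The data-layout idea — storing stiffness contributions per node pair, i.e. per nonzero 3x3 block of the sparse system matrix, rather than per element — can be sketched without any GPU code. The structures and names below are illustrative, not the paper's CUDA implementation; on a GPU, each block row would typically map to a thread or warp.

```python
import numpy as np
from collections import defaultdict

def group_by_node_pair(elements, element_matrices, bs=3):
    """Re-group per-element stiffness contributions into per-node-pair bs x bs blocks."""
    blocks = defaultdict(lambda: np.zeros((bs, bs)))
    for nodes, Ke in zip(elements, element_matrices):
        for a, i in enumerate(nodes):
            for b, j in enumerate(nodes):
                blocks[(i, j)] += Ke[a * bs:(a + 1) * bs, b * bs:(b + 1) * bs]
    return blocks

def block_matvec(blocks, u, bs=3):
    """y = K u using the node-pair blocks; block rows are independent (GPU-friendly)."""
    y = np.zeros_like(u)
    for (i, j), B in blocks.items():
        y[i * bs:(i + 1) * bs] += B @ u[j * bs:(j + 1) * bs]
    return y

# Tiny example: two tetrahedra sharing a face, with random symmetric 12x12 element matrices.
rng = np.random.default_rng(1)
elements = [(0, 1, 2, 3), (1, 2, 3, 4)]
element_matrices = [(lambda A: A + A.T)(rng.standard_normal((12, 12))) for _ in elements]
blocks = group_by_node_pair(elements, element_matrices)
u = rng.standard_normal(5 * 3)                # 5 nodes, 3 DOF each
y = block_matvec(blocks, u)
```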

  6. Quantum Chemistry for Solvated Molecules on Graphical Processing Units (GPUs)using Polarizable Continuum Models

    CERN Document Server

    Liu, Fang; Kulik, Heather J; Martínez, Todd J

    2015-01-01

    The conductor-like polarization model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by the computational expense of both the gas-phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementat...
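
    Both solver strategies mentioned — a dynamic convergence threshold and a block-Jacobi preconditioner — are generic linear-algebra ideas. The sketch below shows them in a preconditioned conjugate-gradient loop whose tolerance is tied to a hypothetical outer SCF error; it illustrates the concepts only, not the paper's C-PCM code, and it omits the randomization of the block partition.

```python
import numpy as np

def block_jacobi_preconditioner(A, block_size):
    """Invert the diagonal blocks of A; apply as z = M^{-1} r block by block."""
    n = A.shape[0]
    inv_blocks = []
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        inv_blocks.append(np.linalg.inv(A[start:stop, start:stop]))
    def apply(r):
        z = np.empty_like(r)
        for k, inv in enumerate(inv_blocks):
            s = k * block_size
            z[s:s + inv.shape[0]] = inv @ r[s:s + inv.shape[0]]
        return z
    return apply

def pcg(A, b, Minv, tol, max_iter=500):
    """Preconditioned conjugate gradients for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# "Dynamic threshold": solve only as tightly as the current outer (SCF) error warrants.
rng = np.random.default_rng(2)
n, bs = 120, 8
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                   # SPD test matrix
b = rng.standard_normal(n)
Minv = block_jacobi_preconditioner(A, bs)
for scf_error in (1e-2, 1e-4, 1e-6):          # hypothetical outer-loop errors
    x = pcg(A, b, Minv, tol=0.1 * scf_error)
    print(scf_error, np.linalg.norm(b - A @ x))
```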

  7. Parallel multigrid solver of radiative transfer equation for photon transport via graphics processing unit

    Science.gov (United States)

    Gao, Hao; Phan, Lan; Lin, Yuting

    2012-09-01

    A graphics processing unit-based parallel multigrid solver for a radiative transfer equation with vacuum or reflection boundary conditions is presented for heterogeneous media with complex geometry based on two-dimensional triangular meshes or three-dimensional tetrahedral meshes. The computational complexity of this parallel solver is linearly proportional to the degrees of freedom in both angular and spatial variables, while the full multigrid method is utilized to minimize the number of iterations. The overall speedup is roughly 30- to 300-fold with respect to our prior multigrid solver, depending on the underlying regime and the parallelization. The numerical validations are presented with the MATLAB codes at https://sites.google.com/site/rtefastsolver/.
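
    The claim that cost scales linearly with the degrees of freedom is the defining property of (full) multigrid. A radiative-transfer discretization is far beyond a short example, so the sketch below shows the ingredients on a 1D Poisson problem instead — weighted-Jacobi smoothing, residual restriction, coarse-grid correction, and prolongation in a recursive V-cycle. It illustrates the solver family only, not the paper's GPU implementation.

```python
import numpy as np

def residual(u, f, h):
    """r = f - A u for the 1D operator -u'' with zero Dirichlet boundaries."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def weighted_jacobi(u, f, h, sweeps=3, omega=2/3):
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = u_new
    return u

def restrict(r):
    """Full weighting onto a grid with half as many intervals."""
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return rc

def prolong(ec, n_fine):
    """Linear interpolation back to the fine grid."""
    e = np.zeros(n_fine)
    e[2:-1:2] = ec[1:-1]
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if len(u) <= 3:                       # coarsest grid: solve the single unknown exactly
        u[1] = 0.5 * h**2 * f[1]
        return u
    u = weighted_jacobi(u, f, h)          # pre-smoothing
    ec = v_cycle(np.zeros((len(u) - 1) // 2 + 1), restrict(residual(u, f, h)), 2 * h)
    u += prolong(ec, len(u))              # coarse-grid correction
    return weighted_jacobi(u, f, h)       # post-smoothing

n = 256                                   # number of intervals (power of two)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)          # exact solution is sin(pi x)
u = np.zeros(n + 1)
for cycle in range(8):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```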

  8. Improving the Quotation Process of an After-Sales Unit

    OpenAIRE

    Matilainen, Janne

    2013-01-01

    The purpose of this study was to model and analyze the quotation process of area managers at a global company. Process improvement requires understanding the fundamentals of the process. The study was conducted as a case study. The data comprised internal documentation of the case company, literature, and semi-structured, themed interviews with process performers and stakeholders. The objective was to produce a model of the current state of the process. The focus was to establish a holistic view o...

  9. Unit Operation Experiment Linking Classroom with Industrial Processing

    Science.gov (United States)

    Benson, Tracy J.; Richmond, Peyton C.; LeBlanc, Weldon

    2013-01-01

    An industrial-type distillation column, including appropriate pumps, heat exchangers, and automation, was used as a unit operations experiment to provide a link between classroom teaching and real-world applications. Students were presented with an open-ended experiment where they defined the testing parameters to solve a generalized problem. The…

  10. The emergence of computer science instructional units in American colleges and universities (1950--1975): A history

    Science.gov (United States)

    Conners, Susan Elaine

    The purpose and scope of this dissertation is to investigate the origins and development of academic computer science units in American higher education and to examine the intent and structure of their curricula. Specifically, the study examines selected undergraduate and graduate curricula that developed from 1950 to 1975. This dissertation examines several of the earliest academic units formed and the issues surrounding their formation. This study examines some of the variety of courses and programs that existed among the early computer science programs. The actual titles of the units varied, but they shared a common overarching goal: to study computers. The departments formed in various ways, and some units were subsets of other departments. The faculties of these new units were often composed of faculty members from various other disciplines. This dissertation is an exploration of the connections between a variety of diverse institutions and the new computer science discipline that formed from these early academic roots. While much has been written about the history of hardware and software development and about the individual pioneers in the relatively new computer science discipline, the history of the academic units has been documented primarily on the basis of individual institutions. This study uses a wider lens to examine the patterns of these early academic units as they formed and became computer science units. The successes of these early pioneers resulted in a proliferation of academic computer programs in the following decades. The curricular debates continue as the number and purposes of these programs continue to expand. This dissertation seeks to provide useful information for future curricular decisions by examining the roots of academic computer science units.

  11. Quality control and dosimetry in computed tomography units; Controle de qualidade e dosimetria em equipamentos de tomografia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Pina, Diana Rodrigues de; Ribeiro, Sergio Marrone [UNESP, Botucatu, SP (Brazil). Faculdade de Medicina], e-mail: drpina@fmb.unesp.br; Duarte, Sergio Barbosa [Centro Brasileiro e Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Netto, Thomaz Ghilardi [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Hospital das Clinicas. Centro de Ciencias das Imagens e Fisica Medica; Morceli, Jose [UNESP, Botucatu, SP (Brazil). Faculdade de Medicina. Secao de Diagnostico por Imagem; Carbi, Eros Duarte Ortigoso; Costa Neto, Andre; Souza, Rafael Toledo Fernandes de [UNESP, Botucatu, SP (Brazil). Inst. de Biociencias

    2009-05-15

    Objective: To evaluate equipment conditions and dosimetry in computed tomography services utilizing protocols for head, abdomen, and lumbar spine in adult patients (in three different units) and pediatric patients up to 18 months of age (in one of the units evaluated). Materials and methods: The computed tomography dose index and multiple-scan average dose were estimated in studies of adult patients with three different units. Additionally, entrance surface doses as well as absorbed doses were estimated in head studies for both adult and pediatric patients in a single computed tomography unit. Results: Mechanical quality control tests were performed, demonstrating that the computed tomography units comply with the equipment-use specifications established by the current standards. Dosimetry results demonstrated that the multiple-scan average dose values exceeded the reference levels by up to 109.0%, with considerable variation among the computed tomography units evaluated in the present study. Absorbed doses obtained with pediatric protocols are lower than those obtained with adult protocols, with a reduction of up to 51.0% for the thyroid gland. Conclusion: The present study has analyzed the operational conditions of three computed tomography units, establishing which parameters should be set for the deployment of a quality control program in the institutions where this study was developed. (author)

  12. Image quality dependence on image processing software in computed radiography

    Directory of Open Access Journals (Sweden)

    Lourens Jochemus Strauss

    2012-06-01

    Full Text Available Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different image appearance was recently released: MUSICA2. Aim. This study quantitatively compares the image quality of images acquired without post-processing (flatfield) with images processed using these two software packages. Methods. Four aspects of image quality were evaluated. An aluminium step-wedge was imaged using constant mA at tube voltages varying from 40 to 117 kV. Signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) were calculated for all steps. Contrast variation with object size was evaluated by visual assessment of images of a Perspex contrast-detail phantom, and an image quality figure (IQF) was calculated. Resolution was assessed using modulation transfer functions (MTFs). Results. SNRs for MUSICA2 were generally higher than for the other two methods. The CNRs were comparable between the two software versions, although MUSICA2 had slightly higher values at lower kV. The flatfield CNR values were better than those for the processed images. All images showed a decrease in CNR with tube voltage. The contrast-detail measurements showed that both MUSICA programmes improved the contrast of smaller objects. MUSICA2 was found to give the lowest (best) IQF; MTF measurements confirmed this, with values at 3.5 lp/mm of 10% for MUSICA2, 8% for MUSICA and 5% for flatfield. Conclusion. Both MUSICA software packages produced images with better contrast resolution than unprocessed images. MUSICA2 gives slightly better image quality than MUSICA.
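
    The SNR and CNR figures of merit used above are simple region-of-interest statistics. A minimal sketch of how such values are typically computed from a step-wedge image is shown below; the ROI coordinates, the synthetic image, and the exact normalisation are assumptions, since definitions vary between protocols.

```python
import numpy as np

def roi_stats(image, y0, y1, x0, x1):
    roi = image[y0:y1, x0:x1].astype(float)
    return roi.mean(), roi.std()

def snr(image, step_roi):
    """Signal-to-noise ratio of one step: mean pixel value over its standard deviation."""
    mean, std = roi_stats(image, *step_roi)
    return mean / std

def cnr(image, step_roi, background_roi):
    """Contrast-to-noise ratio between a step and a background region."""
    m1, _ = roi_stats(image, *step_roi)
    m2, s2 = roi_stats(image, *background_roi)
    return abs(m1 - m2) / s2

# Synthetic two-step image with noise, just to exercise the functions.
rng = np.random.default_rng(3)
img = np.full((200, 200), 1000.0)
img[:, 100:] = 1200.0
img += rng.normal(0, 20, img.shape)
print("SNR:", snr(img, (10, 90, 10, 90)))
print("CNR:", cnr(img, (10, 90, 110, 190), (10, 90, 10, 90)))
```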

  13. Load Balancing in Local Computational Grids within Resource Allocation Process

    Directory of Open Access Journals (Sweden)

    Rouhollah Golmohammadi

    2012-11-01

    Full Text Available A suitable resource allocation method for computational grids should schedule resources in a way that meets the requirements of both the users and the resource providers; i.e., the maximum number of tasks should be completed within their time and budget constraints, and the incoming load should be distributed equally among resources. This is a decision-making problem in which the scheduler must select one resource from among all available ones. It is a multi-criteria decision-making problem, because different properties of the resources affect the decision. The goal of this decision-making process is to balance the load and to complete the tasks within their defined constraints. The proposed algorithm is an Analytic-Hierarchy-Process-based Resource Allocation (ARA) method. This method estimates a preference value for each resource and then selects the appropriate resource based on these values. The simulations show that the ARA method decreases the task failure rate by at least 48% and increases the balance factor by more than 3.4%.
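
    The core machinery of an AHP-based scheduler is (i) deriving criterion weights from a pairwise-comparison matrix via its principal eigenvector and (ii) scoring each resource by a weighted sum of its normalised attributes. The sketch below illustrates only that generic machinery; the criteria, comparison values, and resource data are hypothetical and are not the ARA method's actual parameters.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights = normalised principal eigenvector of the comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# Hypothetical criteria: speed (more is better), load and cost (less is better).
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
weights = ahp_weights(pairwise)

# Hypothetical resources: rows = resources, columns = (speed, load, cost).
resources = np.array([[2.0, 0.7, 1.0],
                      [1.5, 0.2, 0.8],
                      [3.0, 0.9, 2.0]])
benefit = np.array([True, False, False])        # whether "more is better" per criterion

norm = resources / resources.sum(axis=0)        # normalise benefit criteria directly
norm[:, ~benefit] = 1 / resources[:, ~benefit]  # invert cost criteria, then normalise
norm[:, ~benefit] /= norm[:, ~benefit].sum(axis=0)
scores = norm @ weights
print("preference per resource:", scores, "-> choose resource", int(np.argmax(scores)))
```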

  14. A Computational Model of Syntactic Processing Ambiguity Resolution from Interpretation

    CERN Document Server

    Niv, M

    1994-01-01

    Syntactic ambiguity abounds in natural language, yet humans have no difficulty coping with it. In fact, the process of ambiguity resolution is almost always unconscious. It is not infallible, however, as example 1 demonstrates. 1. The horse raced past the barn fell. This sentence is perfectly grammatical, as is evident when it appears in the following context: 2. Two horses were being shown off to a prospective buyer. One was raced past a meadow, and the other was raced past a barn. ... Grammatical yet unprocessable sentences such as 1 are called `garden-path sentences.' Their existence provides an opportunity to investigate the human sentence processing mechanism by studying how and when it fails. The aim of this thesis is to construct a computational model of language understanding which can predict processing difficulty. The data to be modeled are known examples of garden-path and non-garden-path sentences, and other results from psycholinguistics. It is widely believed that there are two distinct loci...

  15. Co-occurrence of Photochemical and Microbiological Transformation Processes in Open-Water Unit Process Wetlands.

    Science.gov (United States)

    Prasse, Carsten; Wenk, Jannis; Jasper, Justin T; Ternes, Thomas A; Sedlak, David L

    2015-12-15

    The fate of anthropogenic trace organic contaminants in surface waters can be complex due to the occurrence of multiple parallel and consecutive transformation processes. In this study, the removal of five antiviral drugs (abacavir, acyclovir, emtricitabine, lamivudine and zidovudine) via both bio- and phototransformation processes, was investigated in laboratory microcosm experiments simulating an open-water unit process wetland receiving municipal wastewater effluent. Phototransformation was the main removal mechanism for abacavir, zidovudine, and emtricitabine, with half-lives (t1/2,photo) in wetland water of 1.6, 7.6, and 25 h, respectively. In contrast, removal of acyclovir and lamivudine was mainly attributable to slower microbial processes (t1/2,bio = 74 and 120 h, respectively). Identification of transformation products revealed that bio- and phototransformation reactions took place at different moieties. For abacavir and zidovudine, rapid transformation was attributable to high reactivity of the cyclopropylamine and azido moieties, respectively. Despite substantial differences in kinetics of different antiviral drugs, biotransformation reactions mainly involved oxidation of hydroxyl groups to the corresponding carboxylic acids. Phototransformation rates of parent antiviral drugs and their biotransformation products were similar, indicating that prior exposure to microorganisms (e.g., in a wastewater treatment plant or a vegetated wetland) would not affect the rate of transformation of the part of the molecule susceptible to phototransformation. However, phototransformation strongly affected the rates of biotransformation of the hydroxyl groups, which in some cases resulted in greater persistence of phototransformation products.
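
    Since both removal pathways are reported as first-order half-lives, their combined effect is easy to estimate: for independent parallel first-order processes the rate constants add, and the overall half-life follows. The sketch below shows the arithmetic with purely hypothetical half-lives, not values taken from the study.

```python
import math

def k_from_half_life(t_half_hours):
    """First-order rate constant from a half-life."""
    return math.log(2) / t_half_hours

def combined_half_life(*t_halves):
    """Overall half-life when independent first-order processes act in parallel."""
    return math.log(2) / sum(k_from_half_life(t) for t in t_halves)

# Hypothetical example: a 10 h photolysis half-life plus a 100 h biotransformation
# half-life (illustrative values only) give an overall half-life of about 9.1 h,
# i.e. the faster pathway dominates.
print(combined_half_life(10, 100))
```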

  16. 75 FR 50695 - Dominican Republic-Central America-United States Free Trade Agreement

    Science.gov (United States)

    2010-08-17

    ..., El Salvador, Guatemala, Honduras, Nicaragua, and the United States signed the Dominican Republic....61.00, HTSUS, and that are products of Canada, Mexico, or Israel. * * * * * (3) Yarn, fabric,...

  17. Simple computation of reaction-diffusion processes on point clouds.

    Science.gov (United States)

    Macdonald, Colin B; Merriman, Barry; Ruuth, Steven J

    2013-06-04

    The study of reaction-diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction-diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction-diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces.
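
    The authors' contribution is a closest-point-style formulation that avoids surface differential geometry; reproducing it is beyond a short sketch. As a rough stand-in for the idea of running reaction-diffusion directly on an unorganized point set, the sketch below uses a k-nearest-neighbour graph Laplacian as the diffusion operator and explicit Euler stepping of a Gray-Scott-type reaction. The neighbourhood size, scaling, and parameters are assumptions, and this is explicitly not the authors' method.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph_laplacian(points, k=8):
    """Unnormalised graph Laplacian L = D - W on a k-nearest-neighbour graph
    (a crude stand-in for a surface Laplacian on an unorganized point cloud)."""
    n = len(points)
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)          # first neighbour is the point itself
    sigma = np.mean(dists[:, 1:])                     # kernel width from typical spacing
    W = np.zeros((n, n))
    for i in range(n):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            w = np.exp(-(d / sigma) ** 2)
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]                         # symmetrise
    return np.diag(W.sum(axis=1)) - W

# Point cloud on a unit sphere, with Gray-Scott reaction-diffusion on top of it.
rng = np.random.default_rng(4)
pts = rng.standard_normal((400, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
L = knn_graph_laplacian(pts)

u, v = np.ones(len(pts)), np.zeros(len(pts))
seed = rng.choice(len(pts), 20, replace=False)        # perturb a few points
u[seed], v[seed] = 0.5, 0.25
Du, Dv, F, kk, dt = 0.08, 0.04, 0.035, 0.065, 0.2     # illustrative parameters
for _ in range(2000):                                 # explicit Euler time steps
    uvv = u * v * v
    u += dt * (-Du * (L @ u) - uvv + F * (1 - u))
    v += dt * (-Dv * (L @ v) + uvv - (F + kk) * v)
```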

  18. Simple computation of reaction–diffusion processes on point clouds

    KAUST Repository

    Macdonald, Colin B.

    2013-05-20

    The study of reaction-diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction-diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction-diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces.

  19. The science of computing - The evolution of parallel processing

    Science.gov (United States)

    Denning, P. J.

    1985-01-01

    The present paper is concerned with the approaches to be employed to overcome the limitations in software technology that currently impede effective use of parallel hardware technology. The process required to solve the arising problems is found to involve four stages. At present, Stage One is nearly finished, while Stage Two is under way; tentative explorations are beginning on Stage Three, and Stage Four is more distant. In Stage One, parallelism is introduced into the hardware of a single computer, which consists of one or more processors, a main storage system, a secondary storage system, and various peripheral devices. In Stage Two, parallel execution of cooperating programs on different machines becomes explicit, while in Stage Three, new languages will make parallelism implicit. In Stage Four, there will be very high-level user interfaces capable of interacting with scientists at the same level of abstraction as scientists do with each other.

  20. Metrics and the effective computational scientist: process, quality and communication.

    Science.gov (United States)

    Baldwin, Eric T

    2012-09-01

    Recent treatments of computational knowledge worker productivity have focused upon the value the discipline brings to drug discovery using positive anecdotes. While this big picture approach provides important validation of the contributions of these knowledge workers, the impact accounts do not provide the granular detail that can help individuals and teams perform better. I suggest balancing the impact-focus with quantitative measures that can inform the development of scientists. Measuring the quality of work, analyzing and improving processes, and the critical evaluation of communication can provide immediate performance feedback. The introduction of quantitative measures can complement the longer term reporting of impacts on drug discovery. These metric data can document effectiveness trends and can provide a stronger foundation for the impact dialogue.