WorldWideScience

Sample records for central processing units computers

  1. Quantum Central Processing Unit and Quantum Algorithm

    Institute of Scientific and Technical Information of China (English)

    王安民

    2002-01-01

    Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including the quantum Fourier transformation, Shor's algorithm and Grover's algorithm, is obtained in a unified way.

  2. MATLAB Implementation of a Multigrid Solver for Diffusion Problems: Graphics Processing Unit vs. Central Processing Unit

    OpenAIRE

    2010-01-01

    Graphics processing units (GPUs) are immensely powerful processors, and for a variety of applications they outperform the central processing unit (CPU). Recent generations of GPUs have a more flexible architecture than older generations and a more user-friendly programming interface, which makes them better suited to general-purpose programming. A high-end GPU can give a desktop computer the same computational power as a small cluster of CPUs. Speedup of applications by using the GPU has been shown in...

  3. Exploiting Graphics Processing Units for Computational Biology and Bioinformatics

    OpenAIRE

    Payne, Joshua L.; Nicholas A. Sinnott-Armstrong; Jason H Moore

    2010-01-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and Nvidia's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational b...

  4. Multiparallel decompression simultaneously using multicore central processing unit and graphic processing unit

    Science.gov (United States)

    Petta, Andrea; Serra, Luigi; De Nino, Maurizio

    2013-01-01

    The discrete wavelet transform (DWT)-based compression algorithm is widely used in many image compression systems. The time-consuming computation of the 9/7 discrete wavelet decomposition and the bit-plane decoding is usually the bottleneck of these systems. In order to perform real-time decompression on a massive bit stream of compressed images continuously down-linked from the satellite, we propose a graphic processing unit (GPU)-accelerated decoding system in which the GPU and multiple central processing unit (CPU) threads run in parallel. To obtain maximum throughput from the pipeline structure for processing continuous satellite images, an additional workload-balancing algorithm has been implemented that distributes jobs between the CPU and GPU parts so that they run at approximately the same processing speed. Through the pipelined CPU and GPU heterogeneous computing, the entire decoding system approaches a speedup of 15× as compared to its single-threaded CPU counterpart. The proposed channel and source decoding system is able to decompress 1024×1024 satellite images at a speed of 20 frames/s.

  5. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700. PMID:20658333
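
    As a concrete companion to the all-pairs distance example above, the following is a minimal CUDA sketch (not the article's code; kernel name, data layout, and sizes are illustrative assumptions) in which each thread computes the Euclidean distance between one pair of instances stored row-wise in a flat array.

        // all_pairs_distance.cu -- hedged sketch, not the implementation from the article.
        // Each thread computes the Euclidean distance between instances i and j.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void allPairsDistance(const float* data, float* dist, int n, int d)
        {
            int j = blockIdx.x * blockDim.x + threadIdx.x;   // column (second instance)
            int i = blockIdx.y * blockDim.y + threadIdx.y;   // row (first instance)
            if (i >= n || j >= n) return;
            float acc = 0.0f;
            for (int k = 0; k < d; ++k) {                    // loop over features
                float diff = data[i * d + k] - data[j * d + k];
                acc += diff * diff;
            }
            dist[i * n + j] = sqrtf(acc);
        }

        int main()
        {
            const int n = 1024, d = 16;                      // illustrative sizes
            size_t dataBytes = (size_t)n * d * sizeof(float);
            size_t distBytes = (size_t)n * n * sizeof(float);
            float *h_data = new float[n * d], *h_dist = new float[n * n];
            for (int i = 0; i < n * d; ++i) h_data[i] = (float)(i % 7);  // dummy data

            float *d_data, *d_dist;
            cudaMalloc(&d_data, dataBytes);
            cudaMalloc(&d_dist, distBytes);
            cudaMemcpy(d_data, h_data, dataBytes, cudaMemcpyHostToDevice);

            dim3 block(16, 16);
            dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
            allPairsDistance<<<grid, block>>>(d_data, d_dist, n, d);
            cudaMemcpy(h_dist, d_dist, distBytes, cudaMemcpyDeviceToHost);

            printf("dist(0,1) = %f\n", h_dist[1]);
            cudaFree(d_data); cudaFree(d_dist);
            delete[] h_data; delete[] h_dist;
            return 0;
        }

    A production version would stage tiles of the data in shared memory so that global memory reads are coalesced, which is one of the CUDA practices the article discusses.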

  6. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
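
    The point-source summation underlying such CGH computation can be pictured as a CUDA kernel in which every thread accumulates, for one hologram pixel, the contributions of all object points. This is a hedged, simplified sketch with assumed parameter names (pixel pitch, wavenumber) and dummy data; it is not the optimized multi-GPU cluster code described in the record.

        // cgh_point_sketch.cu -- hedged sketch of point-based CGH accumulation.
        // One thread per hologram pixel; the loop runs over all object points.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void cghKernel(const float3* pts, const float* amp, int npts,
                                  float* hologram, int width, int height,
                                  float pitch, float k /* 2*pi/lambda */)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height) return;

            float px = (x - 0.5f * width)  * pitch;    // pixel position on the hologram plane
            float py = (y - 0.5f * height) * pitch;
            float acc = 0.0f;
            for (int p = 0; p < npts; ++p) {
                float dx = px - pts[p].x;
                float dy = py - pts[p].y;
                float dz = pts[p].z;                   // distance from the hologram plane
                float r  = sqrtf(dx * dx + dy * dy + dz * dz);
                acc += amp[p] * cosf(k * r);           // real part of the object wave
            }
            hologram[y * width + x] = acc;
        }

        int main()
        {
            const int W = 512, H = 512, N = 1024;      // illustrative sizes
            const float PI = 3.14159265f;
            float3* d_pts;  float *d_amp, *d_holo;
            cudaMalloc(&d_pts,  N * sizeof(float3));
            cudaMalloc(&d_amp,  N * sizeof(float));
            cudaMalloc(&d_holo, W * H * sizeof(float));
            cudaMemset(d_pts, 0, N * sizeof(float3));  // dummy object: all points at the origin
            cudaMemset(d_amp, 0, N * sizeof(float));
            dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
            cghKernel<<<grid, block>>>(d_pts, d_amp, N, d_holo, W, H,
                                       8e-6f, 2.0f * PI / 532e-9f);
            cudaDeviceSynchronize();
            printf("hologram computed\n");
            cudaFree(d_pts); cudaFree(d_amp); cudaFree(d_holo);
            return 0;
        }

    In a cluster setting of the kind described above, the object points or the pixel plane would be partitioned across GPUs and the partial results combined.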

  7. A Universal Quantum Network Quantum Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    WANG An-Min

    2001-01-01

    A new construction scheme for a universal quantum network which is compatible with the known quantum gate-assembly schemes is proposed. Our quantum network is standard, easy to assemble, reusable, scalable and even potentially programmable. Moreover, we can construct a whole quantum network to implement the general quantum algorithm and quantum simulation procedure. In the above senses, it is a realization of the quantum central processing unit.

  8. Security central processing unit applications in the protection of nuclear facilities

    International Nuclear Information System (INIS)

    New or upgraded electronic security systems protecting nuclear facilities or complexes will be heavily computer dependent. Proper planning for new systems and the employment of new state-of-the-art 32-bit processors in the processing of subsystem reports are key elements in effective security systems. The processing of subsystem reports represents only a small segment of system overhead. In selecting a security system to meet the current and future needs of nuclear security applications, the central processing unit (CPU) applied in the system architecture is the critical element in system performance. New 32-bit technology eliminates the need for program overlays while providing system programmers with well-documented program tools to develop effective systems to operate in all phases of nuclear security applications.

  9. Architectural and performance considerations for a 10^7-instruction/sec optoelectronic central processing unit.

    Science.gov (United States)

    Arrathoon, R; Kozaitis, S

    1987-11-01

    Architectural considerations for a multiple-instruction, single-data-based optoelectronic central processing unit operating at 10^7 instructions per second are detailed. Central to the operation of this device is a giant fiber-optic content-addressable memory in a programmable logic array configuration. The design includes four instructions and emphasizes the fan-in and fan-out capabilities of optical systems. Interconnection limitations and scaling issues are examined.

  10. Process as Content in Computer Science Education: Empirical Determination of Central Processes

    Science.gov (United States)

    Zendler, A.; Spannagel, C.; Klaudt, D.

    2008-01-01

    Computer science education should not be based on short-term developments but on content that is observable in multiple domains of computer science, may be taught at every intellectual level, will be relevant in the longer term, and is related to everyday language and/or thinking. Recently, a catalogue of "central concepts" for computer science…

  11. Density functional theory calculation on many-cores hybrid central processing unit-graphic processing unit architectures.

    Science.gov (United States)

    Genovese, Luigi; Ospici, Matthieu; Deutsch, Thierry; Méhaut, Jean-François; Neelov, Alexey; Goedecker, Stefan

    2009-07-21

    We present the implementation of a full electronic structure calculation code on a hybrid parallel architecture with graphic processing units (GPUs). This implementation is performed on a free software code based on Daubechies wavelets. Such code shows very good performances, systematic convergence properties, and an excellent efficiency on parallel computers. Our GPU-based acceleration fully preserves all these properties. In particular, the code is able to run on many cores which may or may not have a GPU associated, and thus on parallel and massive parallel hybrid machines. With double precision calculations, we may achieve considerable speedup, between a factor of 20 for some operations and a factor of 6 for the whole density functional theory code.

  12. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
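
    The first stage of that signal chain, the spatial filter, is a matrix-matrix product between a filter matrix and a block of sampled channel data. Below is a minimal CUDA sketch of that stage with illustrative names and sizes; it is not the study's implementation, and in practice a tuned library routine (e.g., cuBLAS) would normally be used instead of a naive kernel.

        // spatial_filter_sketch.cu -- hedged sketch of the spatial-filter step:
        // filtered = W (filters x channels) * X (channels x samples).
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void matmul(const float* W, const float* X, float* Y,
                               int nFilters, int nChannels, int nSamples)
        {
            int col = blockIdx.x * blockDim.x + threadIdx.x;   // sample index
            int row = blockIdx.y * blockDim.y + threadIdx.y;   // filter index
            if (row >= nFilters || col >= nSamples) return;
            float acc = 0.0f;
            for (int c = 0; c < nChannels; ++c)
                acc += W[row * nChannels + c] * X[c * nSamples + col];
            Y[row * nSamples + col] = acc;
        }

        int main()
        {
            const int F = 16, C = 1000, S = 250;   // illustrative: 1000 channels, 250 samples
            float *dW, *dX, *dY;
            cudaMalloc(&dW, F * C * sizeof(float));
            cudaMalloc(&dX, C * S * sizeof(float));
            cudaMalloc(&dY, F * S * sizeof(float));
            cudaMemset(dW, 0, F * C * sizeof(float));          // dummy inputs
            cudaMemset(dX, 0, C * S * sizeof(float));
            dim3 block(16, 16), grid((S + 15) / 16, (F + 15) / 16);
            matmul<<<grid, block>>>(dW, dX, dY, F, C, S);
            cudaDeviceSynchronize();
            printf("spatial filter applied\n");
            cudaFree(dW); cudaFree(dX); cudaFree(dY);
            return 0;
        }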

  13. Thermal/Heat Transfer Analysis Using a Graphic Processing Unit (GPU) Enabled Computing Environment Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project was to use GPU enabled computing to accelerate the analyses of heat transfer and thermal effects. Graphical processing unit (GPU)...

  14. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large-scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL-based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute-intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  15. Real-Time Computation of Parameter Fitting and Image Reconstruction Using Graphical Processing Units

    CERN Document Server

    Locans, Uldis; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Gunther; Wang, Qiulin

    2016-01-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of muSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the ...

  16. Fast computation of MadGraph amplitudes on graphics processing unit (GPU)

    Energy Technology Data Exchange (ETDEWEB)

    Hagiwara, K. [KEK Theory Center and Sokendai, Tsukuba (Japan); Kanzaki, J. [KEK and Sokendai, Tsukuba (Japan); Li, Q. [Peking University, Department of Physics and State Key, Laboratory of Nuclear Physics and Technology, Beijing (China); Okamura, N. [International University of Health and Welfare, Department of Radiological Sciences, Ohtawara, Tochigi (Japan); Stelzer, T. [University of Illinois, Department of Physics, Urbana, IL (United States)

    2013-11-15

    Continuing our previous studies on QED and QCD processes, we use the graphics processing unit (GPU) for fast calculations of helicity amplitudes for general Standard Model (SM) processes. Additional HEGET codes to handle all SM interactions are introduced, as well as the program MG2CUDA that converts arbitrary MadGraph generated HELAS amplitudes (FORTRAN) into HEGET codes in CUDA. We test all the codes by comparing amplitudes and cross sections for multi-jet processes at the LHC associated with production of single and double weak bosons, a top-quark pair, Higgs boson plus a weak boson or a top-quark pair, and multiple Higgs bosons via weak-boson fusion, where all the heavy particles are allowed to decay into light quarks and leptons with full spin correlations. All the helicity amplitudes computed by HEGET are found to agree with those computed by HELAS within the expected numerical accuracy, and the cross sections obtained by gBASES, a GPU version of the Monte Carlo integration program, agree with those obtained by BASES (FORTRAN), as well as those obtained by MadGraph. The performance of GPU was over a factor of 10 faster than CPU for all processes except those with the highest number of jets. (orig.)

  17. Optical diagnostics of a single evaporating droplet using fast parallel computing on graphics processing units

    Science.gov (United States)

    Jakubczyk, D.; Migacz, S.; Derkachov, G.; Woźniak, M.; Archer, J.; Kolwas, K.

    2016-09-01

    We report on the first application of graphics processing unit (GPU)-accelerated computing technology to improve the performance of the numerical methods used for the optical characterization of evaporating microdroplets. Single microdroplets of various liquids with different volatility and molecular weight (glycerine, glycols, water, etc.), as well as mixtures of liquids and diverse suspensions, evaporate inside the electrodynamic trap under a chosen temperature and composition of the atmosphere. The series of scattering patterns recorded from the evaporating microdroplets are processed by fitting complete Mie theory predictions with a gradientless lookup-table method. We showed that computations on GPUs can be effectively applied to inverse scattering problems. In particular, our technique accelerated calculations of Mie scattering theory more than 800 times relative to a single-core processor running in a MATLAB environment and almost 100 times relative to the corresponding code in the C language. Additionally, we overcame problems of time-consuming data post-processing when some of the parameters (particularly the refractive index) of an investigated liquid are uncertain. Our program allows us to track the parameters characterizing the evaporating droplet nearly simultaneously with the progress of evaporation.
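
    The gradientless lookup-table fit mentioned above can be pictured as a brute-force best-match search: for every precomputed Mie pattern in a table, a misfit against the measured scattering pattern is computed and the minimum is kept. The CUDA sketch below illustrates that search under assumed array layouts and dummy data; it is not the authors' MATLAB or C code.

        // mie_lookup_sketch.cu -- hedged sketch of a lookup-table scattering fit.
        // One thread per table entry: sum of squared differences against the measurement.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void misfitKernel(const float* table, const float* measured,
                                     float* misfit, int nEntries, int nAngles)
        {
            int e = blockIdx.x * blockDim.x + threadIdx.x;
            if (e >= nEntries) return;
            float sse = 0.0f;
            for (int a = 0; a < nAngles; ++a) {
                float d = table[e * nAngles + a] - measured[a];
                sse += d * d;
            }
            misfit[e] = sse;
        }

        int main()
        {
            const int nEntries = 100000, nAngles = 512;        // illustrative table size
            float *dTab, *dMeas, *dMis;
            cudaMalloc(&dTab,  (size_t)nEntries * nAngles * sizeof(float));
            cudaMalloc(&dMeas, nAngles * sizeof(float));
            cudaMalloc(&dMis,  nEntries * sizeof(float));
            cudaMemset(dTab, 0, (size_t)nEntries * nAngles * sizeof(float));
            cudaMemset(dMeas, 0, nAngles * sizeof(float));
            misfitKernel<<<(nEntries + 255) / 256, 256>>>(dTab, dMeas, dMis, nEntries, nAngles);

            // Copy misfits back and take the argmin on the host; a device-side
            // reduction would be the obvious next optimization.
            float* hMis = new float[nEntries];
            cudaMemcpy(hMis, dMis, nEntries * sizeof(float), cudaMemcpyDeviceToHost);
            int best = 0;
            for (int e = 1; e < nEntries; ++e) if (hMis[e] < hMis[best]) best = e;
            printf("best table entry: %d (misfit %g)\n", best, hMis[best]);
            delete[] hMis;
            cudaFree(dTab); cudaFree(dMeas); cudaFree(dMis);
            return 0;
        }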

  18. STRATEGIC BUSINESS UNIT – THE CENTRAL ELEMENT OF THE BUSINESS PORTFOLIO STRATEGIC PLANNING PROCESS

    OpenAIRE

    FLORIN TUDOR IONESCU

    2011-01-01

    Over time, due to changes in the marketing environment generated by tightening competition and by technological, social and political pressures, companies have adopted a new approach by which potential businesses began to be treated as strategic business units. A strategic business unit can be considered a part of a company, a product line within a division, and sometimes a single product or brand. From a strategic perspective, the diversified companies represent a collection of busine...

  19. From Central Guidance Unit to Student Support Services Unit: The Outcome of a Consultation Process in Trinidad and Tobago

    Science.gov (United States)

    Watkins, Marley W.; Hall, Tracey E.; Worrell, Frank C.

    2014-01-01

    In this article, we report on a multiyear consultation project between a consulting team based in the United States and the Ministry of Education in Trinidad and Tobago. The project was initiated with a request for training in counseling for secondary school students but ended with the training of personnel from the Ministry of Education in…

  20. Accelerating the Fourier split operator method via graphics processing units

    CERN Document Server

    Bauke, Heiko

    2010-01-01

    Current generations of graphics processing units have turned into highly parallel devices with general computing capabilities. Thus, graphics processing units may be utilized, for example, to solve time-dependent partial differential equations by the Fourier split operator method. In this contribution, we demonstrate that graphics processing units are capable of calculating fast Fourier transforms much more efficiently than traditional central processing units. Thus, graphics processing units render efficient implementations of the Fourier split operator method possible. Performance gains of more than an order of magnitude as compared to implementations for traditional central processing units are reached in the solution of the time-dependent Schrödinger equation and the time-dependent Dirac equation.
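
    For reference, one step of the (Strang-split) Fourier split operator method can be written, for a Hamiltonian H = T + V with T diagonal in momentum space and V diagonal in position space (units with \hbar = 1), as

        \psi(t+\Delta t) \;\approx\;
        e^{-\mathrm{i} V \Delta t/2}\,
        \mathcal{F}^{-1}\, e^{-\mathrm{i} T(k)\,\Delta t}\, \mathcal{F}\,
        e^{-\mathrm{i} V \Delta t/2}\, \psi(t),

    so each time step reduces to two fast Fourier transforms plus element-wise phase multiplications, which is exactly the mix of FFTs and pointwise work that maps well onto graphics processing units.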

  1. An Investigation Into the Feasibility of Merging Three Technical Processing Operations Into One Central Unit.

    Science.gov (United States)

    Burns, Robert W., Jr.

    Three contiguous schools in the upper Midwest--a teachers' training college and a private four-year college in one state, and a land-grant university in another--were studied to see if their libraries could merge one of their major divisions--technical services--into a single administrative unit. Potential benefits from such a merger were felt to…

  2. HAL/SM system functional design specification. [systems analysis and design analysis of central processing units

    Science.gov (United States)

    Ross, C.; Williams, G. P. W., Jr.

    1975-01-01

    The functional design of a preprocessor and its subsystems is described. A structure chart and a data flow diagram are included for each subsystem. Also, a group of intermodule interface definitions (one definition per module) is included immediately following the structure chart and data flow diagram for a particular subsystem. Each of these intermodule interface definitions consists of the identification of the module, the function the module is to perform, the identification and definition of parameter interfaces to the module, and any design notes associated with the module. Also described are compilers and computer libraries.

  3. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain–Computer Interface Feature Extraction

    OpenAIRE

    J. Adam Wilson; Williams, Justin C.

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a ...

  4. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  5. Distribution of lithostratigraphic units within the central block of Yucca Mountain, Nevada: A three-dimensional computer-based model, Version YMP.R2.0

    International Nuclear Information System (INIS)

    Yucca Mountain, Nevada is underlain by 14.0 to 11.6 Ma volcanic rocks tilted eastward 3 to 20 degrees and cut by faults that were primarily active between 12.7 and 11.6 Ma. A three-dimensional computer-based model of the central block of the mountain consists of seven structural subblocks composed of six formations and the interstratified-bedded tuffaceous deposits. Rocks from the 12.7 Ma Tiva Canyon Tuff, which forms most of the exposed rocks on the mountain, to the 13.1 Ma Prow Pass Tuff are modeled with 13 surfaces. Modeled units represent single formations such as the Pah Canyon Tuff, grouped units such as the combination of the Yucca Mountain Tuff with the superjacent bedded tuff, and divisions of the Topopah Spring Tuff such as the crystal-poor vitrophyre interval. The model is based on data from 75 boreholes, from which a structure contour map at the base of the Tiva Canyon Tuff and isochore maps for each unit are constructed to serve as primary input. Modeling consists of an iterative cycle that begins with the primary structure contour map, from which isochore values of the subjacent model unit are subtracted to produce the structure contour map on the base of that unit. This new structure contour map forms the input for another cycle of isochore subtraction to produce the next structure contour map. In this method of solids modeling, the model units are represented by surfaces (structure contour maps), and all surfaces are stored in the model. Surfaces can be converted to volumes of model units with additional effort. This lithostratigraphic and structural model can be used for (1) storing data from, and planning future, site characterization activities, (2) providing preliminary geometry of units for design of the Exploratory Studies Facility and potential repository, and (3) performance assessment evaluations.

  6. Identification of a site critical for kinase regulation on the central processing unit (CPU) helix of the aspartate receptor.

    Science.gov (United States)

    Trammell, M A; Falke, J J

    1999-01-01

    Ligand binding to the homodimeric aspartate receptor of Escherichia coli and Salmonella typhimurium generates a transmembrane signal that regulates the activity of a cytoplasmic histidine kinase, thereby controlling cellular chemotaxis. This receptor also senses intracellular pH and ambient temperature and is covalently modified by an adaptation system. A specific helix in the cytoplasmic domain of the receptor, helix alpha6, has been previously implicated in the processing of these multiple input signals. While the solvent-exposed face of helix alpha6 possesses adaptive methylation sites known to play a role in kinase regulation, the functional significance of its buried face is less clear. This buried region lies at the subunit interface where helix alpha6 packs against its symmetric partner, helix alpha6'. To test the role of the helix alpha6-helix alpha6' interface in kinase regulation, the present study introduces a series of 13 side-chain substitutions at the Gly 278 position on the buried face of helix alpha6. The substitutions are observed to dramatically alter receptor function in vivo and in vitro, yielding effects ranging from kinase superactivation (11 examples) to complete kinase inhibition (one example). Moreover, four hydrophobic, branched side chains (Val, Ile, Phe, and Trp) lock the kinase in the superactivated state regardless of whether the receptor is occupied by ligand. The observation that most side-chain substitutions at position 278 yield kinase superactivation, combined with evidence that such facile superactivation is rare at other receptor positions, identifies the buried Gly 278 residue as a regulatory hotspot where helix packing is tightly coupled to kinase regulation. Together, helix alpha6 and its packing interactions function as a simple central processing unit (CPU) that senses multiple input signals, integrates these signals, and transmits the output to the signaling subdomain where the histidine kinase is bound. Analogous CPU

  7. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    Science.gov (United States)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities of the affected patient. To address the inconsistency and user-dependency of manual lesion measurement in MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithms used in CAD development, MS CAD integration and evaluation in the clinical workflow is technically challenging due to the high computation rates and memory bandwidth required by the recursive nature of the algorithm. In this paper, we present the development and evaluation of a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA developmental toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to integrate rapidly into an electronic patient record or any disease-centric health care system.
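
    The per-voxel KNN probability described above maps naturally to the GPU: one thread per voxel, each scanning the labeled training samples and keeping its k nearest. The CUDA sketch below uses a small fixed k, illustrative feature counts, and dummy data; it is a hedged illustration of the idea, not the prototype's MATLAB/CUDA integration.

        // knn_voxel_sketch.cu -- hedged sketch of per-voxel KNN lesion probability.
        // One thread per voxel; each keeps its K nearest training samples by insertion.
        #include <cuda_runtime.h>
        #include <cfloat>

        #define K 5          // number of neighbours (small, kept in local memory)
        #define NFEAT 4      // features per voxel (illustrative)

        __global__ void knnProb(const float* voxels, int nVoxels,
                                const float* train, const int* labels, int nTrain,
                                float* prob)
        {
            int v = blockIdx.x * blockDim.x + threadIdx.x;
            if (v >= nVoxels) return;

            float bestDist[K];
            int   bestLab[K];
            for (int i = 0; i < K; ++i) { bestDist[i] = FLT_MAX; bestLab[i] = 0; }

            for (int t = 0; t < nTrain; ++t) {
                float d = 0.0f;
                for (int f = 0; f < NFEAT; ++f) {
                    float diff = voxels[v * NFEAT + f] - train[t * NFEAT + f];
                    d += diff * diff;
                }
                if (d < bestDist[K - 1]) {                 // insert into the sorted K-best list
                    int i = K - 1;
                    while (i > 0 && bestDist[i - 1] > d) {
                        bestDist[i] = bestDist[i - 1];
                        bestLab[i]  = bestLab[i - 1];
                        --i;
                    }
                    bestDist[i] = d;
                    bestLab[i]  = labels[t];
                }
            }

            int lesionVotes = 0;
            for (int i = 0; i < K; ++i) lesionVotes += bestLab[i];   // labels: 1 = lesion, 0 = normal
            prob[v] = (float)lesionVotes / K;
        }

        int main()
        {
            const int nVox = 1 << 20, nTrain = 4096;       // illustrative sizes, dummy data
            float *dVox, *dTrain, *dProb; int* dLab;
            cudaMalloc(&dVox,   (size_t)nVox   * NFEAT * sizeof(float));
            cudaMalloc(&dTrain, (size_t)nTrain * NFEAT * sizeof(float));
            cudaMalloc(&dLab,   nTrain * sizeof(int));
            cudaMalloc(&dProb,  nVox * sizeof(float));
            cudaMemset(dVox,   0, (size_t)nVox   * NFEAT * sizeof(float));
            cudaMemset(dTrain, 0, (size_t)nTrain * NFEAT * sizeof(float));
            cudaMemset(dLab,   0, nTrain * sizeof(int));
            knnProb<<<(nVox + 255) / 256, 256>>>(dVox, nVox, dTrain, dLab, nTrain, dProb);
            cudaDeviceSynchronize();
            cudaFree(dVox); cudaFree(dTrain); cudaFree(dLab); cudaFree(dProb);
            return 0;
        }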

  8. Reduction of computing time for seismic applications based on the Helmholtz equation by Graphics Processing Units

    NARCIS (Netherlands)

    Knibbe, H.P.

    2015-01-01

    The oil and gas industry makes use of computational intensive algorithms to provide an image of the subsurface. The image is obtained by sending wave energy into the subsurface and recording the signal required for a seismic wave to reflect back to the surface from the Earth interfaces that may have

  9. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses summed absolute difference error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation, we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized nonfull grid search CPU-based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and simplified unsymmetrical multi-hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.
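
    The core of the full-search block matching described above, the summed absolute difference (SAD) evaluated at every candidate displacement, can be sketched as a CUDA kernel in which each thread handles one integer displacement of one block. The names, sizes, and border assumptions below are illustrative; this is not the evaluated implementation.

        // sad_full_search_sketch.cu -- hedged sketch of full-search block matching.
        // One thread per candidate displacement (dx, dy) of a single reference block.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void sadKernel(const unsigned char* ref, const unsigned char* cur,
                                  int width, int bx, int by, int blockSize,
                                  int range, float* sadOut)
        {
            int dx = (int)(blockIdx.x * blockDim.x + threadIdx.x) - range;
            int dy = (int)(blockIdx.y * blockDim.y + threadIdx.y) - range;
            if (dx > range || dy > range) return;
            int side = 2 * range + 1;

            // Assumes the caller keeps (bx, by) far enough from the borders that every
            // displaced block stays inside the frame.
            float sad = 0.0f;
            for (int y = 0; y < blockSize; ++y)
                for (int x = 0; x < blockSize; ++x) {
                    int c = cur[(by + y) * width + (bx + x)];
                    int r = ref[(by + y + dy) * width + (bx + x + dx)];
                    sad += fabsf((float)(c - r));
                }
            sadOut[(dy + range) * side + (dx + range)] = sad;
        }

        int main()
        {
            const int W = 720, H = 480, B = 16, R = 16;   // sizes echoing the record, dummy frames
            const int side = 2 * R + 1;
            unsigned char *dRef, *dCur; float* dSad;
            cudaMalloc(&dRef, W * H);
            cudaMalloc(&dCur, W * H);
            cudaMalloc(&dSad, side * side * sizeof(float));
            cudaMemset(dRef, 0, W * H);
            cudaMemset(dCur, 0, W * H);
            dim3 block(16, 16), grid((side + 15) / 16, (side + 15) / 16);
            sadKernel<<<grid, block>>>(dRef, dCur, W, 64, 64, B, R, dSad);
            cudaDeviceSynchronize();
            printf("SAD surface computed for one block\n");
            cudaFree(dRef); cudaFree(dCur); cudaFree(dSad);
            return 0;
        }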

  10. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards. PMID:22347787

  11. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    Science.gov (United States)

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar with that of CPU + GPU. © 2016 Wiley Periodicals, Inc. PMID:27431905
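
    The idea behind such a scheduler, letting each processor pull work at its own pace rather than partitioning it statically, can be illustrated with a small CUDA/C++ sketch in which CPU threads and a GPU driver thread take chunks from a shared atomic counter. This is only a simplified work-sharing approximation of the work-stealing scheduler mentioned in the record, and it assumes a GPU that supports concurrent managed-memory access.

        // work_sharing_sketch.cu -- hedged sketch of dynamic CPU+GPU work distribution.
        // Workers (CPU threads and one GPU driver thread) pull chunks from a shared
        // atomic counter until the work is exhausted; no prior performance model is needed.
        // Assumes concurrent managed-memory access (e.g., Pascal or later on Linux).
        #include <cuda_runtime.h>
        #include <atomic>
        #include <thread>
        #include <vector>
        #include <cstdio>

        __global__ void squareKernel(float* data, int offset, int count)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < count) data[offset + i] *= data[offset + i];
        }

        int main()
        {
            const int N = 1 << 22, CHUNK = 1 << 16;
            const int nChunks = N / CHUNK;
            float* data;                                   // unified memory keeps the sketch short
            cudaMallocManaged(&data, N * sizeof(float));
            for (int i = 0; i < N; ++i) data[i] = 2.0f;

            std::atomic<int> next(0);

            // GPU driver thread: grabs chunks and launches kernels on them.
            std::thread gpuWorker([&] {
                int c;
                while ((c = next.fetch_add(1)) < nChunks) {
                    squareKernel<<<(CHUNK + 255) / 256, 256>>>(data, c * CHUNK, CHUNK);
                    cudaDeviceSynchronize();               // simple and serializing; streams would overlap
                }
            });

            // CPU workers: grab chunks and process them directly.
            std::vector<std::thread> cpuWorkers;
            for (int t = 0; t < 4; ++t)
                cpuWorkers.emplace_back([&] {
                    int c;
                    while ((c = next.fetch_add(1)) < nChunks)
                        for (int i = c * CHUNK; i < (c + 1) * CHUNK; ++i) data[i] *= data[i];
                });

            gpuWorker.join();
            for (auto& t : cpuWorkers) t.join();
            printf("data[0] = %f\n", data[0]);             // expect 4.0
            cudaFree(data);
            return 0;
        }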

  12. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    Science.gov (United States)

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar with that of CPU + GPU. © 2016 Wiley Periodicals, Inc.

  13. The Vacuum Pyrolysis of the Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    王晓雅

    2012-01-01

    The low-temperature pyrolysis of an important type of electronic waste, the central processing unit (CPU), was investigated under vacuum conditions and compared with the results of pyrolysis at higher temperatures. The results showed that at pyrolysis temperatures of 500-700 °C the CPU base plate was pyrolysed thoroughly with a high pyrolysis-oil yield, which favours recovery of the organics in the CPU, and the pins could be separated completely from the base plates. When the pyrolysis was carried out at 300-400 °C, the solder mask of the CPU was pyrolysed and the pins could be separated from the base plates with a relatively intact gold-plated layer; the pyrolysis-oil yield was lower, but the composition of the liquid product was relatively simple, making it easy to separate and purify.

  14. GPUDePiCt: A Parallel Implementation of a Clustering Algorithm for Computing Degenerate Primers on Graphics Processing Units.

    Science.gov (United States)

    Cickovski, Trevor; Flor, Tiffany; Irving-Sachs, Galen; Novikov, Philip; Parda, James; Narasimhan, Giri

    2015-01-01

    In order to make multiple copies of a target sequence in the laboratory, the technique of Polymerase Chain Reaction (PCR) requires the design of "primers", which are short fragments of nucleotides complementary to the flanking regions of the target sequence. If the same primer is to amplify multiple closely related target sequences, then it is necessary to make the primers "degenerate", which would allow it to hybridize to target sequences with a limited amount of variability that may have been caused by mutations. However, the PCR technique can only allow a limited amount of degeneracy, and therefore the design of degenerate primers requires the identification of reasonably well-conserved regions in the input sequences. We take an existing algorithm for designing degenerate primers that is based on clustering and parallelize it in a web-accessible software package GPUDePiCt, using a shared memory model and the computing power of Graphics Processing Units (GPUs). We test our implementation on large sets of aligned sequences from the human genome and show a multi-fold speedup for clustering using our hybrid GPU/CPU implementation over a pure CPU approach for these sequences, which consist of more than 7,500 nucleotides. We also demonstrate that this speedup is consistent over larger numbers and longer lengths of aligned sequences.

  15. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Science.gov (United States)

    Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study was aimed at investigating the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial condition, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter than on the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality from the simulation was obtained with the number of photon histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed with a statistical approach, the quality of the GPU and CPU images is essentially the same.
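
    The "one photon per GPU thread" mapping described above can be illustrated with a deliberately simplified CUDA sketch: each thread draws one photon history with cuRAND, and the fraction transmitted through a homogeneous slab is tallied with an atomic counter. This is a hedged toy example of the parallelization scheme only, not the MC-GPU code.

        // photon_mc_sketch.cu -- hedged sketch of "one photon per GPU thread" Monte Carlo.
        // Photons are attenuated through a homogeneous slab; each thread samples one history.
        #include <cuda_runtime.h>
        #include <curand_kernel.h>
        #include <cstdio>
        #include <cmath>

        __global__ void photonKernel(unsigned long long seed, float mu, float thickness,
                                     unsigned int* transmitted, int nPhotons)
        {
            int id = blockIdx.x * blockDim.x + threadIdx.x;
            if (id >= nPhotons) return;
            curandState state;
            curand_init(seed, id, 0, &state);

            // Sample a free path from the exponential attenuation law; the photon is
            // "transmitted" if the sampled path exceeds the slab thickness.
            float path = -logf(curand_uniform(&state)) / mu;
            if (path > thickness) atomicAdd(transmitted, 1u);
        }

        int main()
        {
            const int nPhotons = 1 << 24;                 // number of photon histories
            const float mu = 0.2f /* 1/cm */, thickness = 5.0f /* cm */;
            unsigned int* dCount;
            cudaMalloc(&dCount, sizeof(unsigned int));
            cudaMemset(dCount, 0, sizeof(unsigned int));

            photonKernel<<<(nPhotons + 255) / 256, 256>>>(1234ULL, mu, thickness, dCount, nPhotons);

            unsigned int hCount = 0;
            cudaMemcpy(&hCount, dCount, sizeof(unsigned int), cudaMemcpyDeviceToHost);
            printf("transmitted fraction: %f (analytic exp(-mu*t) = %f)\n",
                   (double)hCount / nPhotons, exp(-mu * thickness));
            cudaFree(dCount);
            return 0;
        }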

  16. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  17. Graphics processing units: more than the way to realistic video games

    OpenAIRE

    GARCIA-SUCERQUIA, JORGE; Trujillo, Carlos

    2011-01-01

    The huge market for video games has propelled the development of hardware and software focused on making the game environment more realistic. Among such developments are graphics processing units (GPUs), which are intended to relieve the central processing unit (CPU) of the host computer of the computation that creates "life" for video games. GPUs reach this goal with the use of multiple computation cores operating on a parallel architecture; these features have made the GPUs at...

  18. Real-space density functional theory on graphical processing units: computational approach and comparison to Gaussian basis set methods

    OpenAIRE

    Andrade, Xavier; Aspuru-Guzik, Alan

    2013-01-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a su...

  19. Power plant process computer

    International Nuclear Information System (INIS)

    The concept of instrumentation and control in nuclear power plants incorporates the use of process computers for tasks which are on-line with respect to real-time requirements but not closed-loop with respect to control. The general scope of tasks is: - alarm annunciation on CRTs - data logging - data recording for post-trip reviews and plant behaviour analysis - nuclear data computation - graphic displays. Process computers are used additionally for dedicated tasks such as the aeroball measuring system and the turbine stress evaluator. Further applications are personal dose supervision and access monitoring. (orig.)

  20. Hyperspectral processing in graphical processing units

    Science.gov (United States)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across the board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.

  1. Real-space density functional theory on graphical processing units: computational approach and comparison to Gaussian basis set methods

    CERN Document Server

    Andrade, Xavier

    2013-01-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code OCTOPUS, can reach a sustained performance of up to 90 GFlops for a single GPU, representing an important speed-up when compared to the CPU version of the code. Moreover, for some systems our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  2. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    Science.gov (United States)

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs. PMID:26589153

  3. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    Science.gov (United States)

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  4. Relativistic hydrodynamics on graphics processing units

    International Nuclear Information System (INIS)

    Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by the lack of an efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers an unprecedented increase of the computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using the Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. A third-order Runge-Kutta scheme was used for integration in the time domain. Our implementation improves the performance by about 2 orders of magnitude compared to a single-threaded program. The algorithm tests of a 1+1D shock tube and 3+1D simulations with ellipsoidal and Hubble-like expansion are presented.
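
    The schemes named above advance a system of conservation laws, \partial_t U + \partial_i F^i(U) = 0, in time. The third-order Runge-Kutta integrator referred to in the record is commonly written (in the standard Shu-Osher form, given here as general background rather than as a quotation from the paper) as

        U^{(1)}  = U^n + \Delta t\, L(U^n),
        U^{(2)}  = \tfrac{3}{4} U^n + \tfrac{1}{4}\left( U^{(1)} + \Delta t\, L(U^{(1)}) \right),
        U^{n+1}  = \tfrac{1}{3} U^n + \tfrac{2}{3}\left( U^{(2)} + \Delta t\, L(U^{(2)}) \right),

    where L(U) denotes the spatial discretization of -\partial_i F^i(U), here provided by the MUSTA-FORCE or WENO scheme.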

  5. Relativistic hydrodynamics on graphics processing units

    CERN Document Server

    Sikorski, Jan; Porter-Sobieraj, Joanna; Słodkowski, Marcin; Krzyżanowski, Piotr; Książek, Natalia; Duda, Przemysław

    2016-01-01

    Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by lack of efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers unprecedented increase of the computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. Third order Runge-Kutta scheme was used for integration in the t...

  6. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    Science.gov (United States)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
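
    The computational core of that approach, a large spatial convolution evaluated as forward FFTs, a point-wise product, and an inverse FFT with cuFFT, can be sketched as follows. The grid contents and sizes are dummies, and reducing the Kriging step to a single convolution is an illustrative simplification of the authors' pipeline.

        // fft_convolution_sketch.cu -- hedged sketch of the FFT step at the heart of
        // FFT-based (regression) Kriging: a grid convolved with a covariance kernel via cuFFT.
        // Illustrative only; link with -lcufft.
        #include <cuda_runtime.h>
        #include <cufft.h>
        #include <cstdio>

        __global__ void complexMulScale(cufftComplex* a, const cufftComplex* b, int n, float scale)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            cufftComplex x = a[i], y = b[i];
            a[i].x = (x.x * y.x - x.y * y.y) * scale;      // complex product, rescaled so that
            a[i].y = (x.x * y.y + x.y * y.x) * scale;      // the inverse FFT is properly normalized
        }

        int main()
        {
            const int NX = 1024, NY = 1024, N = NX * NY;
            cufftComplex *dGrid, *dCov;
            cudaMalloc(&dGrid, N * sizeof(cufftComplex));
            cudaMalloc(&dCov,  N * sizeof(cufftComplex));
            cudaMemset(dGrid, 0, N * sizeof(cufftComplex));   // residual grid (dummy)
            cudaMemset(dCov,  0, N * sizeof(cufftComplex));   // covariance kernel (dummy)

            cufftHandle plan;
            cufftPlan2d(&plan, NY, NX, CUFFT_C2C);

            cufftExecC2C(plan, dGrid, dGrid, CUFFT_FORWARD);  // transform both factors
            cufftExecC2C(plan, dCov,  dCov,  CUFFT_FORWARD);
            complexMulScale<<<(N + 255) / 256, 256>>>(dGrid, dCov, N, 1.0f / N);
            cufftExecC2C(plan, dGrid, dGrid, CUFFT_INVERSE);  // back to the spatial domain
            cudaDeviceSynchronize();
            printf("convolution done\n");

            cufftDestroy(plan);
            cudaFree(dGrid); cudaFree(dCov);
            return 0;
        }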

  7. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    Science.gov (United States)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.

  8. Distributed Computing with Centralized Support Works at Brigham Young.

    Science.gov (United States)

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  9. Central Limit Theorem for Nonlinear Hawkes Processes

    CERN Document Server

    Zhu, Lingjiong

    2012-01-01

    Hawkes process is a self-exciting point process with clustering effect whose jump rate depends on its entire past history. It has wide applications in neuroscience, finance and many other fields. Linear Hawkes process has an immigration-birth representation and can be computed more or less explicitly. It has been extensively studied in the past and the limit theorems are well understood. On the contrary, nonlinear Hawkes process lacks the immigration-birth representation and is much harder to analyze. In this paper, we obtain a functional central limit theorem for nonlinear Hawkes process.
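
    For orientation, the object of study can be written as a point process N with conditional intensity

        \lambda_t = \phi\!\left( \int_0^{t^-} h(t-s)\, \mathrm{d}N_s \right),

    which is the linear Hawkes process when \phi(x) = \nu + x and a nonlinear Hawkes process for general \phi. The functional central limit theorem discussed in the record takes the familiar diffusive form

        \frac{N_{nt} - \mu\, n t}{\sqrt{n}} \;\Longrightarrow\; \sigma B(t), \qquad n \to \infty,

    where B is a standard Brownian motion and \mu and \sigma^2 are the asymptotic mean intensity and variance; the exact constants and the conditions on \phi and h are those given in the paper and are not reproduced here.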

  10. Aquarius Digital Processing Unit

    Science.gov (United States)

    Forgione, Joshua; Winkert, George; Dobson, Norman

    2009-01-01

    Three documents provide information on a digital processing unit (DPU) for the planned Aquarius mission, in which a radiometer aboard a spacecraft orbiting Earth is to measure radiometric temperatures from which data on sea-surface salinity are to be deduced. The DPU is the interface between the radiometer and an instrument-command-and-data system aboard the spacecraft. The DPU cycles the radiometer through a programmable sequence of states, collects and processes all radiometric data, and collects all housekeeping data pertaining to operation of the radiometer. The documents summarize the DPU design, with emphasis on innovative aspects that include mainly the following: a) In the radiometer and the DPU, conversion from analog voltages to digital data is effected by means of asynchronous voltage-to-frequency converters in combination with a frequency-measurement scheme implemented in field-programmable gate arrays (FPGAs). b) A scheme to compensate for aging and changes in the temperature of the DPU in order to provide an overall temperature-measurement accuracy within 0.01 K includes a high-precision, inexpensive DC temperature measurement scheme and a drift-compensation scheme that was used on the Cassini radar system. c) An interface among multiple FPGAs in the DPU guarantees setup and hold times.

  11. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    Science.gov (United States)

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used for calculating computer-generated holograms. This paper proposes a novel fast calculation method for a patch model that builds on the point-based method. The method provides a calculation time that is proportional to the number of patches rather than to the number of point light sources, which makes it suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method.

  12. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    Science.gov (United States)

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used for calculating computer-generated holograms. This paper proposes a novel fast calculation method for a patch model that builds on the point-based method. The method provides a calculation time that is proportional to the number of patches rather than to the number of point light sources, which makes it suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method. PMID:26835949

  13. Fault Data Processing Technology Applied in the Central Maintenance Computer System

    Institute of Scientific and Technical Information of China (English)

    李文娟; 贺尔铭; 马存宝

    2014-01-01

    Based on an understanding of how failures are generated and processed on board the aircraft, the functions, modeling strategy and processing flow of the fault data processing module included in the central maintenance computer system (CMCS) are analyzed. Fault message generation, cascaded-fault screening, consolidation of repeated faults and the correlation of flight deck effects (FDEs) with maintenance messages are presented in turn. Taking an advanced foreign airplane model as an example, logic-equation-based fault isolation technology is discussed in depth, so that the system design approach is tied to the onboard fault diagnosis model. As CMCS technology continues to develop and mature, it will inevitably change the traditional maintenance and operating modes of commercial aircraft.

  14. Sono-leather technology with ultrasound: a boon for unit operations in leather processing - review of our research work at Central Leather Research Institute (CLRI), India.

    Science.gov (United States)

    Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari

    2009-01-01

    Ultrasound is a sound wave with a frequency above the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation, with benefits such as improved process efficiency, reduced process time, operation under milder conditions and the avoidance of some toxic chemicals, leading to cleaner processing. Ultrasound can therefore augment these processes as an advanced technique, the important point being that ultrasonic irradiation is a physical method of activation rather than one that relies on added chemical entities. Detailed studies have been made of unit operations related to leather, such as diffusion rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, oil-water emulsification and solid-liquid tannin extraction from vegetable tanning materials, as well as precipitation reactions in wastewater treatment. The fundamental mechanism involved in these processes is ultrasonic cavitation in the liquid medium; in addition, some process-specific mechanisms also contribute. For instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could explain the diffusion rate enhancement in leather processing, as reported for the first time. Extensive scientific research has been carried out in this area by our group in the Chemical Engineering Division of CLRI, and most of these benefits have been demonstrated in publications in peer-reviewed international journals. The overall results indicate about a 2-5-fold increase in process efficiency due to ultrasound under the given process conditions for the various unit operations, with additional benefits. Scale-up studies are underway to convert these concepts into a real, viable larger-scale operation. In

  15. Sono-leather technology with ultrasound: a boon for unit operations in leather processing - review of our research work at Central Leather Research Institute (CLRI), India.

    Science.gov (United States)

    Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari

    2009-01-01

    Ultrasound is a sound wave with a frequency above the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation, with benefits such as improved process efficiency, reduced process time, operation under milder conditions and the avoidance of some toxic chemicals, leading to cleaner processing. Ultrasound can therefore augment these processes as an advanced technique, the important point being that ultrasonic irradiation is a physical method of activation rather than one that relies on added chemical entities. Detailed studies have been made of unit operations related to leather, such as diffusion rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, oil-water emulsification and solid-liquid tannin extraction from vegetable tanning materials, as well as precipitation reactions in wastewater treatment. The fundamental mechanism involved in these processes is ultrasonic cavitation in the liquid medium; in addition, some process-specific mechanisms also contribute. For instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could explain the diffusion rate enhancement in leather processing, as reported for the first time. Extensive scientific research has been carried out in this area by our group in the Chemical Engineering Division of CLRI, and most of these benefits have been demonstrated in publications in peer-reviewed international journals. The overall results indicate about a 2-5-fold increase in process efficiency due to ultrasound under the given process conditions for the various unit operations, with additional benefits. Scale-up studies are underway to convert these concepts into a real, viable larger-scale operation. In

  16. Computing betweenness centrality in external memory

    DEFF Research Database (Denmark)

    Arge, Lars; Goodrich, Michael T.; Walderveen, Freek van

    2013-01-01

    Betweenness centrality is one of the most well-known measures of the importance of nodes in a social-network graph. In this paper we describe the first known external-memory and cache-oblivious algorithms for computing betweenness centrality. We present four different external-memory algorithms...... exhibiting various tradeoffs with respect to performance. Two of the algorithms are cache-oblivious. We describe general algorithms for networks with weighted and unweighted edges and a specialized algorithm for networks with small diameters, as is common in social networks exhibiting the “small worlds...

  17. Tandem processes promoted by a hydrogen shift in 6-arylfulvenes bearing acetalic units at ortho position: a combined experimental and computational study.

    Science.gov (United States)

    Alajarin, Mateo; Marin-Luna, Marta; Sanchez-Andrada, Pilar; Vidal, Angel

    2016-01-01

    6-Phenylfulvenes bearing (1,3-dioxolan or dioxan)-2-yl substituents at the ortho position convert into mixtures of 4- and 9-(hydroxy)alkoxy-substituted benz[f]indenes as a result of cascade processes initiated by a thermally activated hydrogen shift. Structurally related fulvenes with non-cyclic acetalic units afforded mixtures of 4- and 9-alkoxybenz[f]indenes under similar thermal conditions. Mechanistic paths promoted by an initial [1,4]-, [1,5]-, [1,7]- or [1,9]-H shift are conceivable for explaining these conversions. Deuterium labelling experiments exclude the [1,4]-hydride shift as the first step. A computational study scrutinized the reaction channels of these tandem conversions starting with [1,5]-, [1,7]- and [1,9]-H shifts, revealing that this first step is the rate-determining one and that the [1,9]-H shift is the one with the lowest energy barrier. PMID:26977185

  18. Citizens unite for computational immunology!

    Science.gov (United States)

    Belden, Orrin S; Baker, Sarah Catherine; Baker, Brian M

    2015-07-01

    Recruiting volunteers who can provide computational time, programming expertise, or puzzle-solving talent has emerged as a powerful tool for biomedical research. Recent projects demonstrate the potential for such 'crowdsourcing' efforts in immunology. Tools for developing applications, new funding opportunities, and an eager public make crowdsourcing a serious option for creative solutions for computationally-challenging problems. Expanded uses of crowdsourcing in immunology will allow for more efficient large-scale data collection and analysis. It will also involve, inspire, educate, and engage the public in a variety of meaningful ways. The benefits are real - it is time to jump in!

  19. Mobility in process calculi and natural computing

    CERN Document Server

    Aman, Bogdan

    2011-01-01

    The design of formal calculi in which fundamental concepts underlying interactive systems can be described and studied has been a central theme of theoretical computer science in recent decades, while membrane computing, a rule-based formalism inspired by biological cells, is a more recent field that belongs to the general area of natural computing. This is the first book to establish a link between these two research directions while treating mobility as the central topic. In the first chapter the authors offer a formal description of mobility in process calculi, noting the entities that move

  20. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Full Text Available Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. GPU-accelerated applications are found in both scientific and commercial domains. Sorting is considered one of the very important operations in many applications, so its efficient implementation is essential for the overall application performance. This paper represents an effort to analyze and evaluate the implementations of the representative sorting algorithms on graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. Algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
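
    For readers who want a concrete starting point, the hedged sketch below sorts one million integer keys on the GPU with the Thrust library that ships with the CUDA toolkit (Thrust dispatches a radix sort for primitive key types). It is a generic illustration of CUDA-platform sorting, not the benchmark code evaluated in the paper, and the key count is arbitrary.

      // Minimal CUDA/Thrust sorting sketch (arbitrary key count, integer keys).
      #include <thrust/host_vector.h>
      #include <thrust/device_vector.h>
      #include <thrust/sort.h>
      #include <thrust/copy.h>
      #include <cstdlib>
      #include <cstdio>

      int main() {
          const int n = 1 << 20;                        // one million keys
          thrust::host_vector<int> h(n);
          for (int i = 0; i < n; ++i) h[i] = std::rand();

          thrust::device_vector<int> d = h;             // copy keys to the GPU
          thrust::sort(d.begin(), d.end());             // sort runs entirely on the device
          thrust::copy(d.begin(), d.end(), h.begin());  // copy the sorted keys back

          int lo = h[0], hi = h[n - 1];
          printf("smallest key: %d, largest key: %d\n", lo, hi);
          return 0;
      }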

  1. 2011 floods of the central United States

    Science.gov (United States)

    ,

    2013-01-01

    The Central United States experienced record-setting flooding during 2011, with floods that extended from headwater streams in the Rocky Mountains, to transboundary rivers in the upper Midwest and Northern Plains, to the deep and wide sand-bedded lower Mississippi River. The U.S. Geological Survey (USGS), as part of its mission, collected extensive information during and in the aftermath of the 2011 floods to support scientific analysis of the origins and consequences of extreme floods. The information collected for the 2011 floods, combined with decades of past data, enables scientists and engineers from the USGS to provide syntheses and scientific analyses to inform emergency managers, planners, and policy makers about life-safety, economic, and environmental-health issues surrounding flood hazards for the 2011 floods and future floods like it. USGS data, information, and scientific analyses provide context and understanding of the effect of floods on complex societal issues such as ecosystem and human health, flood-plain management, climate-change adaptation, economic security, and the associated policies enacted for mitigation. Among the largest societal questions is "How do we balance agricultural, economic, life-safety, and environmental needs in and along our rivers?" To address this issue, many scientific questions have to be answered including the following: * How do the 2011 weather and flood conditions compare to the past weather and flood conditions and what can we reasonably expect in the future for flood magnitudes?

  2. Parallel Computers in Signal Processing

    Directory of Open Access Journals (Sweden)

    Narsingh Deo

    1985-07-01

    Full Text Available Signal processing often requires a great deal of raw computing power for which it is important to take a look at parallel computers. The paper reviews various types of parallel computer architectures from the viewpoint of signal and image processing.

  3. THOR Particle Processing Unit PPU

    Science.gov (United States)

    Federica Marcucci, Maria; Bruno, Roberto; Consolini, Giuseppe; D'Amicis, Raffaella; De Lauretis, Marcello; De Marco, Rossana; De Michelis, Paola; Francia, Patrizia; Laurenza, Monica; Materassi, Massimo; Vellante, Massimo; Valentini, Francesco

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. On board THOR, data collected by the Turbulent Electron Analyser, the Ion Mass Spectrum analyser and the Cold Solar Wind ion analyser instruments will be processed by a common digital processor unit, the Particle Processing Unit (PPU). PPU architecture will be based on the state of the art space flight processors and will be fully redundant, in order to efficiently and safely handle the data from the numerous sensors of the instruments suite. The approach of a common processing unit for particle instruments is very important for the enabling of an efficient management for correlative plasma measurements, also facilitating interoperation with other instruments on the spacecraft. Moreover, it permits technical and programmatic synergies giving the possibility to optimize and save spacecraft resources.

  4. Installation of new Generation General Purpose Computer (GPC) compact unit

    Science.gov (United States)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  5. Design and Implementation of a Central Processing Unit Module for Airborne Equipment

    Institute of Scientific and Technical Information of China (English)

    王俊; 吕俊; 杨宁

    2014-01-01

    This paper introduces the design and implementation of a central processing unit for a piece of airborne equipment. The equipment receives command signals from the flight control system over an RS422 link; the central processing unit then performs control, data computation and A/D conversion and feeds the results back to the actuating mechanism, thereby realizing the intended functions of the airborne equipment. The unit has been used on aircraft with good results, which shows that the design is practical and useful as a reference.

  6. Magnetohydrodynamics simulations on graphics processing units

    CERN Document Server

    Wong, Hon-Cheng; Feng, Xueshang; Tang, Zesheng

    2009-01-01

    Magnetohydrodynamics (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implemented these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the authors' knowledge, the first implementation to accelerate computation of MHD simulations on GPUs. Numerical tests have been performed to validate the correctness of our GPU MHD code. Performance measurements show that our GPU-based implementation achieves speedups of 2 (1D problem with 2048 grids), 106 (2D problem with 1024^2 grids), and 43 (3D problem with 128^3 grids), respec...

  7. Representation of Musical Computer Processes

    OpenAIRE

    Fober, Dominique; Orlarey, Yann; Letz, Stéphane

    2014-01-01

    International audience The paper presents a study about the representation of musical computer processes within a music score. The idea is to provide performers with information that could be useful especially in the context of interactive music. The paper starts with a characterization of a musical computer process in order to define the values to be represented. Next it proposes an approach to time representation suitable to asynchronous processes representation.

  8. Analysis and Testing of the Central Processing Unit Load of a Nuclear Power Plant Digital Reactor Protection System

    Institute of Scientific and Technical Information of China (English)

    汪绩宁

    2013-01-01

    The central processing unit (CPU) load of the digital reactor protection system in a nuclear power plant is subject to strict requirements. This paper first analyzes the CPU load of the reactor protection system theoretically and derives a formula for calculating it, then designs a corresponding test method and test device and carries out the actual measurements. Processing of the resulting experimental data shows that the CPU load of the digital reactor protection system meets the technical requirements, and that the load of the main control CPU is higher than that of the standby CPU.

  9. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom-tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
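
    As a hedged companion to the record, the sketch below shows one of the textbook CUDA reduction patterns such work builds on: a shared-memory sum reduction with sequential addressing, producing one partial sum per thread block that is then finished on the host. Problem size and block size are assumptions, and none of the article's further optimizations (loop unrolling, multiple elements per thread, operator templating) are included.

      // Minimal CUDA parallel-reduction sketch (shared memory, sequential addressing).
      #include <cuda_runtime.h>
      #include <cstdio>

      __global__ void reduceSum(const float* in, float* blockSums, int n) {
          extern __shared__ float sdata[];
          const int tid = threadIdx.x;
          const int i = blockIdx.x * blockDim.x + tid;
          sdata[tid] = (i < n) ? in[i] : 0.0f;          // each thread loads one element
          __syncthreads();

          // Halve the number of active threads each step; consecutive threads
          // access consecutive shared-memory addresses (no bank conflicts).
          for (int s = blockDim.x / 2; s > 0; s >>= 1) {
              if (tid < s) sdata[tid] += sdata[tid + s];
              __syncthreads();
          }
          if (tid == 0) blockSums[blockIdx.x] = sdata[0];   // one partial sum per block
      }

      int main() {
          const int n = 1 << 20, threads = 256;
          const int blocks = (n + threads - 1) / threads;
          float *in, *partial;
          cudaMallocManaged(&in, n * sizeof(float));
          cudaMallocManaged(&partial, blocks * sizeof(float));
          for (int i = 0; i < n; ++i) in[i] = 1.0f;

          reduceSum<<<blocks, threads, threads * sizeof(float)>>>(in, partial, n);
          cudaDeviceSynchronize();

          float total = 0.0f;                           // finish the last stage on the CPU
          for (int b = 0; b < blocks; ++b) total += partial[b];
          printf("sum = %.0f (expected %d)\n", total, n);
          cudaFree(in);
          cudaFree(partial);
          return 0;
      }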

  10. Graphics Processing Units for HEP trigger systems

    Science.gov (United States)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  11. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU’s application-specific architecture, harnessing the GPU’s computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
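
    The record mentions the CUDA BLAS library; the hedged sketch below shows the kind of dense BLAS-3 call an LSA pipeline leans on, multiplying two single-precision matrices on the GPU with cublasSgemm. The matrix dimensions are placeholders, cuBLAS expects column-major storage, and the SVD step of LSA itself is not shown.

      // Minimal cuBLAS sketch (placeholder sizes; column-major matrices; no error checks).
      #include <cublas_v2.h>
      #include <cuda_runtime.h>
      #include <vector>
      #include <cstdio>

      int main() {
          const int m = 1024, n = 1024, k = 1024;    // hypothetical term/document dimensions
          std::vector<float> hA(m * k, 1.0f), hB(k * n, 1.0f), hC(m * n, 0.0f);

          float *dA, *dB, *dC;
          cudaMalloc(&dA, hA.size() * sizeof(float));
          cudaMalloc(&dB, hB.size() * sizeof(float));
          cudaMalloc(&dC, hC.size() * sizeof(float));
          cudaMemcpy(dA, hA.data(), hA.size() * sizeof(float), cudaMemcpyHostToDevice);
          cudaMemcpy(dB, hB.data(), hB.size() * sizeof(float), cudaMemcpyHostToDevice);

          cublasHandle_t handle;
          cublasCreate(&handle);
          const float alpha = 1.0f, beta = 0.0f;
          // C = alpha * A * B + beta * C, all matrices column-major on the device.
          cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                      &alpha, dA, m, dB, k, &beta, dC, m);

          cudaMemcpy(hC.data(), dC, hC.size() * sizeof(float), cudaMemcpyDeviceToHost);
          printf("C[0] = %.1f (expected %d)\n", hC[0], k);

          cublasDestroy(handle);
          cudaFree(dA);
          cudaFree(dB);
          cudaFree(dC);
          return 0;
      }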

  12. Grace: a Cross-platform Micromagnetic Simulator On Graphics Processing Units

    CERN Document Server

    Zhu, Ru

    2014-01-01

    A micromagnetic simulator running on a graphics processing unit (GPU) is presented. It achieves a significant performance boost compared to previous central processing unit (CPU) simulators, up to two orders of magnitude for large input problems. Unlike the GPU implementations of other research groups, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform compatible. It runs on GPUs from vendors including NVIDIA, AMD and Intel, which paves the way for fast micromagnetic simulation on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics. A copy of the simulator software is publicly available.

  13. Tandem processes promoted by a hydrogen shift in 6-arylfulvenes bearing acetalic units at ortho position: a combined experimental and computational study

    OpenAIRE

    ALAJARIN, Mateo; Marin-Luna, Marta; Sanchez-Andrada, Pilar; Vidal, Angel

    2016-01-01

    6-Phenylfulvenes bearing (1,3-dioxolan or dioxan)-2-yl substituents at the ortho position convert into mixtures of 4- and 9-(hydroxy)alkoxy-substituted benz[f]indenes as a result of cascade processes initiated by a thermally activated hydrogen shift. Structurally related fulvenes with non-cyclic acetalic units afforded mixtures of 4- and 9-alkoxybenz[f]indenes under similar thermal conditions. Mechanistic paths promoted by an initial [1,4]-, [1,5]-, [1,7]- or [1,9]-H shift are conceivable for expla...

  14. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence-based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  15. Retinoblastoma protein: a central processing unit

    Indian Academy of Sciences (India)

    M Poznic

    2009-06-01

    The retinoblastoma protein (pRb) is one of the key cell-cycle regulating proteins and its inactivation leads to neoplastic transformation and carcinogenesis. This protein regulates critical G1-to-S phase transition through interaction with the E2F family of cell-cycle transcription factors repressing transcription of genes required for this cell-cycle check-point transition. Its activity is regulated through network sensing intracellular and extracellular signals which block or permit phosphorylation (inactivation) of the Rb protein. Mechanisms of Rb-dependent cell-cycle control have been widely studied over the past couple of decades. However, recently it was found that pRb also regulates apoptosis through the same interaction with E2F transcription factors and that Rb–E2F complexes play a role in regulating the transcription of genes involved in differentiation and development.

  16. Retinoblastoma protein: a central processing unit.

    Science.gov (United States)

    Poznic, M

    2009-06-01

    The retinoblastoma protein (pRb) is one of the key cell-cycle regulating proteins and its inactivation leads to neoplastic transformation and carcinogenesis. This protein regulates critical G1-to-S phase transition through interaction with the E2F family of cell-cycle transcription factors repressing transcription of genes required for this cell-cycle check-point transition. Its activity is regulated through network sensing intracellular and extracellular signals which block or permit phosphorylation (inactivation) of the Rb protein. Mechanisms of Rb-dependent cell-cycle control have been widely studied over the past couple of decades. However, recently it was found that pRb also regulates apoptosis through the same interaction with E2F transcription factors and that Rb-E2F complexes play a role in regulating the transcription of genes involved in differentiation and development.

  17. Fast blood flow visualization of high-resolution laser speckle imaging data using graphics processing unit.

    Science.gov (United States)

    Liu, Shusen; Li, Pengcheng; Luo, Qingming

    2008-09-15

    Laser speckle contrast analysis (LASCA) is a non-invasive, full-field optical technique that produces a two-dimensional map of blood flow in biological tissue by analyzing speckle images captured by a CCD camera. Due to the heavy computation required for speckle contrast analysis, video-frame-rate visualization of blood flow, which is essential for medical use, is hard to achieve for high-resolution image data using the CPU (Central Processing Unit) of an ordinary PC (Personal Computer). In this paper, we introduce the GPU (Graphics Processing Unit) into our data-processing framework for laser speckle contrast imaging to achieve fast, high-resolution blood flow visualization on PCs by exploiting the high floating-point processing power of commodity graphics hardware. By using the GPU, a 12-60-fold performance enhancement is obtained in comparison to optimized CPU implementations.
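
    As a rough, unoptimized illustration of the per-pixel work described above (not the authors' implementation), the kernel below computes the spatial speckle contrast K = sigma/mu over a 7 x 7 window for each pixel of a row-major float image; the window size, image size and synthetic input are assumptions.

      // Minimal CUDA speckle-contrast sketch (assumed 7x7 window, row-major image).
      #include <cuda_runtime.h>
      #include <cstdio>
      #include <cmath>

      __global__ void speckleContrast(const float* img, float* k,
                                      int width, int height) {
          const int r = 3;                              // 7x7 window radius
          int x = blockIdx.x * blockDim.x + threadIdx.x;
          int y = blockIdx.y * blockDim.y + threadIdx.y;
          if (x < r || y < r || x >= width - r || y >= height - r) return;

          float sum = 0.0f, sumSq = 0.0f;
          for (int dy = -r; dy <= r; ++dy)
              for (int dx = -r; dx <= r; ++dx) {
                  float v = img[(y + dy) * width + (x + dx)];
                  sum += v;
                  sumSq += v * v;
              }
          const float nPix = float((2 * r + 1) * (2 * r + 1));
          float mean = sum / nPix;
          float var = sumSq / nPix - mean * mean;       // population variance of the window
          k[y * width + x] = (mean > 0.0f) ? sqrtf(fmaxf(var, 0.0f)) / mean : 0.0f;
      }

      int main() {
          const int width = 640, height = 480;          // hypothetical frame size
          float *img, *k;
          cudaMallocManaged(&img, width * height * sizeof(float));
          cudaMallocManaged(&k, width * height * sizeof(float));
          for (int i = 0; i < width * height; ++i) img[i] = float(i % 251) + 1.0f;

          dim3 block(16, 16);
          dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
          speckleContrast<<<grid, block>>>(img, k, width, height);
          cudaDeviceSynchronize();

          printf("contrast at image centre: %f\n", k[(height / 2) * width + width / 2]);
          cudaFree(img);
          cudaFree(k);
          return 0;
      }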

  18. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent pieces of data as input and produce independent pieces as output; this independence comes from the nature of such algorithms, since images, stereopairs or small image-block parts can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicking and many other service operations in parallel using distributed calculations, which save time by reducing jobs that would take several days to several hours. Modern trends in computer technology show an increasing number of CPU cores in workstations, rising speeds in local networks and, as a result, a dropping price for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in digital photogrammetric workstations is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since huge amounts of large raster images must be processed.

  19. CHARACTERISTICS OF FARMLAND LEASING IN THE NORTH CENTRAL UNITED STATES

    OpenAIRE

    Patterson, Brian; Hanson, Steven D.; Robison, Lindon J.

    1998-01-01

    Leasing behavior differs across the North Central United States. Survey data is used to characterize leasing activity in the region. Data is collected on the amount of leased farmland, amount of cash and share leased land, and common output share levels. Factors influencing leasing and arrangements are also identified.

  20. Environmental Engineering Unit Operations and Unit Processes Laboratory Manual.

    Science.gov (United States)

    O'Connor, John T., Ed.

    This manual was prepared for the purpose of stimulating the development of effective unit operations and unit processes laboratory courses in environmental engineering. Laboratory activities emphasizing physical operations, biological, and chemical processes are designed for various educational and equipment levels. An introductory section reviews…

  1. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas-cleaning equipment, and modeling of biogas formation processes.

  2. The Executive Process, Grade Eight. Resource Unit (Unit III).

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.

    This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the executive process. The unit uses case studies of presidential decision making such as the decision to drop the atomic bomb on Hiroshima, the Cuba Bay of Pigs and quarantine decisions, and the Little Rock decision. A case study of…

  3. Computational Material Processing in Microgravity

    Science.gov (United States)

    2005-01-01

    Working with Professor David Matthiesen at Case Western Reserve University (CWRU) a computer model of the DPIMS (Diffusion Processes in Molten Semiconductors) space experiment was developed that is able to predict the thermal field, flow field and concentration profile within a molten germanium capillary under both ground-based and microgravity conditions as illustrated. These models are coupled with a novel nonlinear statistical methodology for estimating the diffusion coefficient from measured concentration values after a given time that yields a more accurate estimate than traditional methods. This code was integrated into a web-based application that has become a standard tool used by engineers in the Materials Science Department at CWRU.

  4. Strategy as Central and Peripheral Processes

    OpenAIRE

    Juul Andersen, Torben; Fredens, Kjeld

    2012-01-01

    Corporate entrepreneurship is deemed essential to uncover opportunities that shape the future strategic path and adapt the firm to environmental change (e.g., Covin and Miles, 1999; Wolcott and Lippitz, 2007). At the same time, rational central processes are important to execute strategic actions in a coordinated manner (e.g., Baum and Wally, 2003; Brews and Hunt, 1999; Goll and Rasheed, 1997). That is, the organization’s adaptive responses and dynamic capabilities are embedded...

  5. Computing with impure numbers - Automatic consistency checking and units conversion using computer algebra

    Science.gov (United States)

    Stoutemyer, D. R.

    1977-01-01

    The computer algebra language MACSYMA enables the programmer to include symbolic physical units in computer calculations, and features automatic detection of dimensionally-inhomogeneous formulas and conversion of inconsistent units in a dimensionally homogeneous formula. Some examples illustrate these features.

  6. Graphics Processing Unit-Based Bioheat Simulation to Facilitate Rapid Decision Making Associated with Cryosurgery Training.

    Science.gov (United States)

    Keelan, Robert; Zhang, Hong; Shimada, Kenji; Rabin, Yoed

    2016-04-01

    This study focuses on the implementation of an efficient numerical technique for cryosurgery simulations on a graphics processing unit as an alternative means to accelerate runtime. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a developmental model. The ability to perform rapid simulations of various test cases is critical to facilitate sound decision making associated with medical training. Consistent with clinical practice, the training tool aims at correlating the frozen region contour and the corresponding temperature field with the target region shape. The current study focuses on the feasibility of graphics processing unit-based computation using C++ accelerated massive parallelism, as one possible implementation. Benchmark results on a variety of computation platforms display between 3-fold acceleration (laptop) and 13-fold acceleration (gaming computer) of cryosurgery simulation, in comparison with the more common implementation on a multicore central processing unit. While the general concept of graphics processing unit-based simulations is not new, its application to phase-change problems, combined with the unique requirements for cryosurgery optimization, represents the core contribution of the current study.
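
    The study above uses C++ Accelerated Massive Parallelism; as a hedged, heavily simplified stand-in for the kind of stencil update a bioheat solver performs every time step, the CUDA kernel below advances a 2-D temperature field by one explicit finite-difference step of plain heat diffusion. Phase change, blood perfusion, cryoprobe boundary conditions and the real material parameters are all omitted, and grid size, coefficients and the seed temperature are assumptions.

      // Minimal CUDA sketch: one explicit step of 2-D heat diffusion (not the full bioheat model).
      #include <cuda_runtime.h>
      #include <cstdio>

      __global__ void heatStep(const float* T, float* Tnew, int nx, int ny,
                               float alpha, float dt, float dx) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          int j = blockIdx.y * blockDim.y + threadIdx.y;
          if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;   // keep boundary fixed

          int idx = j * nx + i;
          float lap = (T[idx - 1] + T[idx + 1] + T[idx - nx] + T[idx + nx]
                       - 4.0f * T[idx]) / (dx * dx);    // 5-point Laplacian
          Tnew[idx] = T[idx] + alpha * dt * lap;        // forward-Euler update
      }

      int main() {
          const int nx = 256, ny = 256;
          const float alpha = 1.4e-7f;                  // thermal diffusivity [m^2/s], assumed
          const float dx = 1.0e-3f;                     // 1 mm grid spacing, assumed
          const float dt = 0.2f * dx * dx / (4.0f * alpha);   // well inside the stability limit

          float *T, *Tnew;
          cudaMallocManaged(&T, nx * ny * sizeof(float));
          cudaMallocManaged(&Tnew, nx * ny * sizeof(float));
          for (int i = 0; i < nx * ny; ++i) T[i] = Tnew[i] = 37.0f;     // body temperature
          T[(ny / 2) * nx + nx / 2] = -150.0f;          // a crude cold seed point

          dim3 block(16, 16), grid((nx + 15) / 16, (ny + 15) / 16);
          for (int step = 0; step < 100; ++step) {      // ping-pong the two buffers
              heatStep<<<grid, block>>>(T, Tnew, nx, ny, alpha, dt, dx);
              cudaDeviceSynchronize();
              float* tmp = T; T = Tnew; Tnew = tmp;
          }
          printf("temperature next to the seed: %.2f C\n", T[(ny / 2) * nx + nx / 2 + 1]);
          cudaFree(T);
          cudaFree(Tnew);
          return 0;
      }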

  7. Programming Graphic Processing Units (GPUs)

    OpenAIRE

    Bakke, Glenn Ruben Årthun

    2009-01-01

    In this thesis we carry out a broad study of the languages, libraries and frameworks for general-purpose computations on graphics processors. We have also studied the different graphics processor architectures that have been developed over the last decade. Eight example programs in OpenGL, CUDA, MPI and OpenMP have been written to highlight the mechanisms for parallelization and memory management. The example programs have been benchmarked and their source lines counted. We found out that programs for th...

  8. Real-time imaging implementation of the Army Research Laboratory synchronous impulse reconstruction radar on a graphics processing unit architecture

    Science.gov (United States)

    Park, Song Jun; Nguyen, Lam H.; Shires, Dale R.; Henz, Brian J.

    2009-05-01

    High computing requirements for the synchronous impulse reconstruction (SIRE) radar algorithm present a challenge for near real-time processing, particularly the calculations involved in output image formation. Forming an image requires a large number of parallel and independent floating-point computations. To reduce the processing time and exploit the abundant parallelism of image processing, a graphics processing unit (GPU) architecture is considered for the imaging algorithm. Widely available off the shelf, high-end GPUs offer inexpensive technology that exhibits great capacity of computing power in one card. To address the parallel nature of graphics processing, the GPU architecture is designed for high computational throughput realized through multiple computing resources to target data parallel applications. Due to a leveled or in some cases reduced clock frequency in mainstream single and multi-core general-purpose central processing units (CPUs), GPU computing is becoming a competitive option for compute-intensive radar imaging algorithm prototyping. We describe the translation and implementation of the SIRE radar backprojection image formation algorithm on a GPU platform. The programming model for GPU's parallel computing and hardware-specific memory optimizations are discussed in the paper. A considerable level of speedup is available from the GPU implementation resulting in processing at real-time acquisition speeds.
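
    As a hedged illustration of why backprojection maps so naturally onto a GPU (one independent thread per output pixel), the kernel below implements a bare-bones time-domain backprojector: for every pixel it sums, over all aperture positions, the range-compressed sample whose delay matches the round-trip distance. The geometry, sampling parameters and nearest-neighbour interpolation are assumptions chosen for brevity and are not the SIRE processing chain itself.

      // Minimal CUDA backprojection sketch (assumed geometry and sampling; synthetic data).
      #include <cuda_runtime.h>
      #include <cstdio>
      #include <cmath>

      __global__ void backproject(const float* data,   // [nPulses x nBins] range profiles
                                  const float* apX, const float* apY,  // aperture positions
                                  float* image, int nx, int ny,
                                  int nPulses, int nBins,
                                  float pixelSize, float r0, float dr) {
          int ix = blockIdx.x * blockDim.x + threadIdx.x;
          int iy = blockIdx.y * blockDim.y + threadIdx.y;
          if (ix >= nx || iy >= ny) return;

          float px = ix * pixelSize;                    // pixel position in metres
          float py = iy * pixelSize;
          float acc = 0.0f;
          for (int p = 0; p < nPulses; ++p) {           // sum a contribution from every pulse
              float ddx = px - apX[p];
              float ddy = py - apY[p];
              float range = sqrtf(ddx * ddx + ddy * ddy);
              int bin = int((range - r0) / dr + 0.5f);  // nearest-neighbour range bin
              if (bin >= 0 && bin < nBins) acc += data[p * nBins + bin];
          }
          image[iy * nx + ix] = acc;
      }

      int main() {
          const int nx = 128, ny = 128, nPulses = 64, nBins = 512;
          float *data, *apX, *apY, *image;
          cudaMallocManaged(&data, nPulses * nBins * sizeof(float));
          cudaMallocManaged(&apX, nPulses * sizeof(float));
          cudaMallocManaged(&apY, nPulses * sizeof(float));
          cudaMallocManaged(&image, nx * ny * sizeof(float));
          for (int i = 0; i < nPulses * nBins; ++i) data[i] = 1.0f;     // synthetic profiles
          for (int p = 0; p < nPulses; ++p) { apX[p] = 0.1f * p; apY[p] = -5.0f; }

          dim3 block(16, 16), grid((nx + 15) / 16, (ny + 15) / 16);
          backproject<<<grid, block>>>(data, apX, apY, image, nx, ny,
                                       nPulses, nBins, 0.05f, 0.0f, 0.05f);
          cudaDeviceSynchronize();
          printf("image(0,0) = %.1f\n", image[0]);
          cudaFree(data);
          cudaFree(apX);
          cudaFree(apY);
          cudaFree(image);
          return 0;
      }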

  9. Significantly reducing registration time in IGRT using graphics processing units

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Denis de Senneville, Baudouin; Tanderup, Kari;

    2008-01-01

    Purpose/Objective For online IGRT, rapid image processing is needed. Fast parallel computations using graphics processing units (GPUs) have recently been made more accessible through general purpose programming interfaces. We present a GPU implementation of the Horn and Schunck method for deforma......). The processing power of GPUs can be used for many image processing tasks in IGRT, making it a useful and cost-efficient tool to help us towards online IGRT....

  10. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  11. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different features of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
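
    The hedged sketch below shows the per-pixel work of Laplacian sharpening as a naive global-memory CUDA kernel: each thread convolves its pixel with a 4-neighbour Laplacian and subtracts the weighted result from the original value. The paper's shared-memory optimization and exact kernel coefficients are not reproduced; image size and the sharpening weight are assumptions.

      // Minimal CUDA Laplacian-sharpening sketch (global memory; assumed 4-neighbour kernel).
      #include <cuda_runtime.h>
      #include <cstdio>

      __global__ void laplacianSharpen(const float* in, float* out,
                                       int width, int height, float weight) {
          int x = blockIdx.x * blockDim.x + threadIdx.x;
          int y = blockIdx.y * blockDim.y + threadIdx.y;
          if (x <= 0 || y <= 0 || x >= width - 1 || y >= height - 1) return;

          int idx = y * width + x;
          float lap = in[idx - 1] + in[idx + 1] + in[idx - width] + in[idx + width]
                      - 4.0f * in[idx];                 // 4-neighbour Laplacian
          out[idx] = in[idx] - weight * lap;            // sharpened pixel
      }

      int main() {
          const int width = 1024, height = 1024;        // hypothetical image size
          float *in, *out;
          cudaMallocManaged(&in, width * height * sizeof(float));
          cudaMallocManaged(&out, width * height * sizeof(float));
          for (int i = 0; i < width * height; ++i) in[i] = float((i / width + i % width) % 256);

          dim3 block(16, 16), grid((width + 15) / 16, (height + 15) / 16);
          laplacianSharpen<<<grid, block>>>(in, out, width, height, 1.0f);
          cudaDeviceSynchronize();

          printf("sharpened centre pixel: %.1f\n", out[(height / 2) * width + width / 2]);
          cudaFree(in);
          cudaFree(out);
          return 0;
      }

    A shared-memory variant of the kind the paper discusses would load an image tile plus a one-pixel halo into on-chip memory once per block before applying the same stencil, reducing redundant global-memory reads.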

  12. Empirical Foundation of Central Concepts for Computer Science Education

    Science.gov (United States)

    Zendler, Andreas; Spannagel, Christian

    2008-01-01

    The design of computer science curricula should rely on central concepts of the discipline rather than on technical short-term developments. Several authors have proposed lists of basic concepts or fundamental ideas in the past. However, these catalogs were based on subjective decisions without any empirical support. This article describes the…

  13. New Generation General Purpose Computer (GPC) compact IBM unit

    Science.gov (United States)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  14. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd;

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate the performance of an implementation and demonstrate that, while there are some implementational pitfalls, a careful implementation can result in impressive improvements.

  15. The south-central United States magnetic anomaly

    Science.gov (United States)

    Starich, P. J.; Hinze, W. J.; Braile, L. W.

    1985-01-01

    A positive magnetic anomaly, which dominates the MAGSAT scalar field over the south-central United States, results from the superposition of magnetic effects from several geologic sources and tectonic structures in the crust. The highly magnetic basement rocks of this region show good correlation with increased crustal thickness, above average crustal velocity and predominantly negative free-air gravity anomalies, all of which are useful constraints for modeling the magnetic sources. The positive anomaly is composed of two primary elements. The western-most segment is related to middle Proterozoic granite intrusions, rhyolite flows and interspersed metamorphic basement rocks in the Texas panhandle and eastern New Mexico. The anomaly and the magnetic crust are bounded to the west by the north-south striking Rio Grande Rift. The anomaly extends eastward over the Grenville age basement rocks of central Texas, and is terminated to the south and east by the buried extension of the Ouachita System. The northern segment of the anomaly extends eastward across Oklahoma and Arkansas to the Mississippi Embayment. It corresponds to a general positive magnetic region associated with the Wichita Mountains igneous complex in south-central Oklahoma and 1.2 to 1.5 Ga. felsic terrane to the north.

  16. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  17. Accelerating NBODY6 with Graphics Processing Units

    CERN Document Server

    Nitadori, Keigo

    2012-01-01

    We describe the use of Graphics Processing Units (GPUs) for speeding up the code NBODY6 which is widely used for direct $N$-body simulations. Over the years, the $N^2$ nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time-steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers which speeded up the force calculation further, we are now in the era of GPUs where relatively small hardware systems are highly cost-effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force which typically involves some 99 percent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction ...
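
    As a hedged illustration of the "regular force" work the GPU takes over, simplified here to a brute-force O(N^2) sum over all particles with Plummer softening (no neighbour scheme, block time-steps or regular/irregular split), the sketch below evaluates the gravitational acceleration on every particle in single precision; the particle count, softening length and lattice initial conditions are assumptions.

      // Minimal CUDA direct N-body force sketch (brute force, softened, G = 1 in N-body units).
      #include <cuda_runtime.h>
      #include <cstdio>

      __global__ void accelAll(const float4* pos,       // xyz = position, w = mass
                               float3* acc, int n, float eps2) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;

          float4 pi = pos[i];
          float ax = 0.0f, ay = 0.0f, az = 0.0f;
          for (int j = 0; j < n; ++j) {                 // sum over all particles
              float4 pj = pos[j];
              float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
              float r2 = dx * dx + dy * dy + dz * dz + eps2;   // Plummer softening
              float invR = rsqrtf(r2);
              float invR3 = invR * invR * invR;
              ax += pj.w * dx * invR3;
              ay += pj.w * dy * invR3;
              az += pj.w * dz * invR3;
          }
          acc[i] = make_float3(ax, ay, az);
      }

      int main() {
          const int n = 4096;
          const float eps2 = 1.0e-4f;                   // softening length squared, assumed
          float4* pos;
          float3* acc;
          cudaMallocManaged(&pos, n * sizeof(float4));
          cudaMallocManaged(&acc, n * sizeof(float3));
          for (int i = 0; i < n; ++i)                   // particles on a crude lattice
              pos[i] = make_float4(float(i % 16), float((i / 16) % 16), float(i / 256), 1.0f / n);

          accelAll<<<(n + 255) / 256, 256>>>(pos, acc, n, eps2);
          cudaDeviceSynchronize();
          printf("acceleration of particle 0: (%g, %g, %g)\n", acc[0].x, acc[0].y, acc[0].z);
          cudaFree(pos);
          cudaFree(acc);
          return 0;
      }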

  18. Computational and Pharmacological Target of Neurovascular Unit for Drug Design and Delivery

    Directory of Open Access Journals (Sweden)

    Md. Mirazul Islam

    2015-01-01

    Full Text Available The blood-brain barrier (BBB) is a dynamic and highly selective permeable interface between the central nervous system (CNS) and the periphery that regulates brain homeostasis. The increasing burden of neurological disorders and the restricted drug delivery into the brain make the BBB a special target for further study. At present, the neurovascular unit (NVU) is a topic of great interest to pharmaceutical companies for CNS drug design and delivery approaches. Recent advances in pharmacology and computational biology make it possible to develop drugs within a limited time and at affordable cost. In this review, we briefly introduce the current understanding of the NVU, including its molecular and cellular composition, physiology, and regulatory function. We also discuss recent technology and the interaction of pharmacogenomics and bioinformatics for drug design as a step towards personalized medicine. Additionally, we develop a gene network to understand the interactions of NVU-associated transporter proteins, which might be useful for understanding the aetiology of neurological disorders and for developing and delivering new targeted protective therapies.

  19. Computer program developed for flowsheet calculations and process data reduction

    Science.gov (United States)

    Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.

    1969-01-01

    Computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.

  20. Using Parallel Computing Methods in Business Processes

    OpenAIRE

    Machek, Ondrej; Hejda, Jan

    2012-01-01

    In computer science, engineers deal with the issue of how to accelerate the execution of extensive tasks with parallel computing algorithms, which are executed on large networks of cooperating processors. The business world forms large networks of business units, too, and in business management, managers often face similar problems. The aim of this paper is to consider the possibilities of using parallel computing methods in business networks. In the first part, we introduce the issue and make some...

  1. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  2. Parallel particle swarm optimization on a graphics processing unit with application to trajectory optimization

    Science.gov (United States)

    Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.

    2016-10-01

    In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The processes of fitness evaluation, updating of velocity and position of all particles are all parallelized and introduced in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and CPU (CPU-PSO). The impact of design dimension, number of particles and size of the thread-block in the GPU and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
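
    A hedged sketch of the particle-update step that such a GPU-PSO parallelizes is given below: one thread per particle-dimension applies the standard velocity and position update, with the uniform random numbers pre-generated on the host for simplicity. The paper's fitness evaluation, best-position bookkeeping and any use of cuRAND are not shown, and the inertia and acceleration coefficients are common textbook values rather than the authors' settings.

      // Minimal CUDA PSO update sketch (one thread per particle-dimension; host-made randoms).
      #include <cuda_runtime.h>
      #include <cstdio>
      #include <cstdlib>

      __global__ void psoUpdate(float* x, float* v,
                                const float* pbest, const float* gbest,
                                const float* r1, const float* r2,
                                int nParticles, int dim,
                                float w, float c1, float c2) {
          int idx = blockIdx.x * blockDim.x + threadIdx.x;   // particle p, dimension d
          if (idx >= nParticles * dim) return;
          int d = idx % dim;

          float vel = w * v[idx]
                    + c1 * r1[idx] * (pbest[idx] - x[idx])   // pull towards personal best
                    + c2 * r2[idx] * (gbest[d] - x[idx]);    // pull towards global best
          v[idx] = vel;
          x[idx] = x[idx] + vel;                             // position update
      }

      int main() {
          const int nParticles = 1024, dim = 8;
          const int total = nParticles * dim;
          float *x, *v, *pbest, *gbest, *r1, *r2;
          cudaMallocManaged(&x, total * sizeof(float));
          cudaMallocManaged(&v, total * sizeof(float));
          cudaMallocManaged(&pbest, total * sizeof(float));
          cudaMallocManaged(&gbest, dim * sizeof(float));
          cudaMallocManaged(&r1, total * sizeof(float));
          cudaMallocManaged(&r2, total * sizeof(float));
          for (int i = 0; i < total; ++i) {
              x[i] = pbest[i] = float(std::rand()) / RAND_MAX;
              v[i] = 0.0f;
              r1[i] = float(std::rand()) / RAND_MAX;
              r2[i] = float(std::rand()) / RAND_MAX;
          }
          for (int d = 0; d < dim; ++d) gbest[d] = 0.5f;     // placeholder global best

          psoUpdate<<<(total + 255) / 256, 256>>>(x, v, pbest, gbest, r1, r2,
                                                  nParticles, dim, 0.729f, 1.494f, 1.494f);
          cudaDeviceSynchronize();
          printf("particle 0, dimension 0 after one update: %f\n", x[0]);
          cudaFree(x); cudaFree(v); cudaFree(pbest); cudaFree(gbest); cudaFree(r1); cudaFree(r2);
          return 0;
      }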

  3. Improved usage of the LOFT process computer

    International Nuclear Information System (INIS)

    This paper describes work recently done to upgrade usage of the plant process computer at the Loss-of-Fluid Test (LOFT) facility. The use of computers to aid reactor operators in understanding plant status and diagnosing plant difficulties is currently being widely studied by the nuclear industry. In this regard, an effort was initiated to improve LOFT process computer usage, since the existing plant process computer has been an available, but only lightly used resource, for aiding LOFT reactor operators. This is a continuing effort and has, to date, produced improvements in data collection, data display for operators, and methods of computer operation

  4. Optimization models of the supply of power structures’ organizational units with centralized procurement

    Directory of Open Access Journals (Sweden)

    Sysoiev Volodymyr

    2013-01-01

    Full Text Available Management of the state power structures’ organizational units for materiel and technical support requires the use of effective tools for supporting decisions, due to the complexity, interdependence, and dynamism of supply in the market economy. The corporate nature of power structures is of particular interest to centralized procurement management, as it provides significant advantages through coordination, eliminating duplication, and economy of scale. This article presents optimization models of the supply of state power structures’ organizational units with centralized procurement, for different levels of simulated materiel and technical support processes. The models allow us to find the most profitable options for state power structures’ organizational supply units in a centre-oriented logistics system in conditions of the changing needs, volume of allocated funds, and logistics costs that accompany the process of supply, by maximizing the provision level of organizational units with necessary material and technical resources for the entire planning period of supply by minimizing the total logistical costs, taking into account the diverse nature and the different priorities of organizational units and material and technical resources.

  5. Sandia's computer support units: The first three years

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.N. [Sandia National Labs., Albuquerque, NM (United States). Labs. Computing Dept.

    1997-11-01

    This paper describes the method by which Sandia National Laboratories has deployed information technology to the line organizations and to the desktop as part of the integrated information services organization under the direction of the Chief Information Officer. This deployment has been done by the Computer Support Unit (CSU) Department. The CSU approach is based on the principle of providing local customer service with a corporate perspective. Success required an approach that was customer-compelled at times and market- or corporate-focused in most cases. Above all, a complete solution was required that included a comprehensive method of technology choices and development, process development, technology implementation, and support. It is the author's hope that this information will be useful in the development of a customer-focused business strategy for information technology deployment and support. Descriptions of current status reflect the status as of May 1997.

  6. The Role of Computers in Writing Process

    Science.gov (United States)

    Ulusoy, Mustafa

    2006-01-01

    In this paper, the role of computers in writing process was investigated. Last 25 years of journals were searched to find related articles. Articles and books were classified under prewriting, composing, and revising and editing headings. The review results showed that computers can make writers' job easy in the writing process. In addition,…

  7. New FORTRAN computer programs to acquire and process isotopic mass-spectrometric data

    International Nuclear Information System (INIS)

    The computer programs described in New Computer Programs to Acquire and Process Isotopic Mass Spectrometric Data have been revised. This report describes in some detail the operation of these programs, which acquire and process isotopic mass spectrometric data. Both functional and overall design aspects are addressed. The three basic program units - file manipulation, data acquisition, and data processing - are discussed in turn. Step-by-step instructions are included where appropriate, and each subsection is described in enough detail to give a clear picture of its function. Organization of file structure, which is central to the entire concept, is extensively discussed with the help of numerous tables. Appendices contain flow charts and outline file structure to help a programmer unfamiliar with the programs to alter them with a minimum of lost time

  8. Syllables as Processing Units in Handwriting Production

    Science.gov (United States)

    Kandel, Sonia; Alvarez, Carlos J.; Vallee, Nathalie

    2006-01-01

    This research focused on the syllable as a processing unit in handwriting. Participants wrote, in uppercase letters, words that had been visually presented. The interletter intervals provide information on the timing of motor production. In Experiment 1, French participants wrote words that shared the initial letters but had different syllable…

  9. Graphics processing unit-assisted lossless decompression

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
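
    As a rough illustration of how such parallel decompression can be organized (not the patented method itself), the sketch below assumes one common Rice convention: a unary quotient of zero bits terminated by a one, followed by k remainder bits, with independently coded blocks whose bit offsets are known so that one CUDA thread can decode each block.

      // Hypothetical sketch: each thread decodes one independently Rice-coded
      // block of samples, given its starting bit offset in the packed stream.
      __device__ int read_bit(const unsigned char *bits, long pos)
      {
          return (bits[pos >> 3] >> (7 - (pos & 7))) & 1;
      }

      __global__ void rice_decode_blocks(const unsigned char *bits,
                                         const long *block_bit_offset,
                                         int samples_per_block, int k,
                                         unsigned int *out, int n_blocks)
      {
          int b = blockIdx.x * blockDim.x + threadIdx.x;
          if (b >= n_blocks) return;

          long pos = block_bit_offset[b];
          for (int s = 0; s < samples_per_block; ++s) {
              unsigned int q = 0;
              while (read_bit(bits, pos++) == 0) ++q;      // unary quotient
              unsigned int r = 0;
              for (int i = 0; i < k; ++i)                  // k-bit remainder
                  r = (r << 1) | (unsigned int)read_bit(bits, pos++);
              out[b * samples_per_block + s] = (q << k) | r;
          }
      }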

  10. Computer-Based Cognitive Tools: Description and Design.

    Science.gov (United States)

    Kennedy, David; McNaught, Carmel

    With computers, tangible tools are represented by the hardware (e.g., the central processing unit, scanners, and video display unit), while intangible tools are represented by the software. There is a special category of computer-based software tools (CBSTs) that have the potential to mediate cognitive processes--computer-based cognitive tools…

  11. Central nervous system infections in the intensive care unit

    Directory of Open Access Journals (Sweden)

    B. Vengamma

    2014-04-01

    Neurological infections constitute an uncommon but important cause requiring admission to an intensive care unit (ICU). In addition, health-care-associated neurological infections may develop in critically ill patients admitted to an ICU for other indications. Central nervous system infections can develop as complications in ICU patients, including post-operative neurosurgical patients. While bacterial infections are the most common cause, mycobacterial and fungal infections are also frequently encountered. Delay in institution of specific treatment is considered to be the single most important poor prognostic factor. Empirical antibiotic therapy must be initiated while awaiting specific culture and sensitivity results. The choice of empirical antimicrobial therapy should take into consideration the most likely pathogens involved, locally prevalent drug-resistance patterns, underlying predisposing and co-morbid conditions, and other factors such as age and immune status. Further, the antibiotic should adequately penetrate the blood-brain and blood-cerebrospinal fluid barriers. The presence of a focal collection of pus warrants immediate surgical drainage. Strict aseptic precautions during surgery, hand hygiene, and care of catheters and devices constitute important preventive measures. A high index of clinical suspicion, aggressive efforts to identify the aetiological cause, and early institution of specific treatment in patients with neurological infections can be life-saving.

  12. Study of multi-programming scheduling of batch processed jobs on third generation computers

    International Nuclear Information System (INIS)

    This research thesis addresses technical aspects of the organisation, management and operation of a computer fleet. The main issue is the search for an appropriate compromise between throughput and turnaround time, i.e. the possibility of increasing throughput while taking the time constraints of each computing centre into account. The author first presents the different systems and properties of third-generation computers (those developed after 1964). He analyses and discusses problems related to multi-programming for these systems (the concept of multi-programming, design issues regarding memory organisation and resource allocation, operational issues regarding memory use, the use of the central processing unit, conflicts between peripheral resources, and so on). He addresses scheduling issues (presentation of the IBM/370 system, internal and external scheduling techniques), and presents a simulator, its parameters related to the use of resources, and the job generation software. He presents a micro-planning pre-processor, describes its operation, and comments on the test results.

  13. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased-array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. The analysis demonstrates that GPGPU (general-purpose GPU) real-time processing of the array radar data is possible with relatively low-cost commercial GPUs.

  14. Porting a Hall MHD Code to a Graphic Processing Unit

    Science.gov (United States)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a second-order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  15. Organization of international market introduction: Can cooperation between central units and local product management influence success

    OpenAIRE

    Baumgarten, Antje; Herstatt, Cornelius; Fantapié Altobelli, Claudia

    2006-01-01

    When organizing international market introductions multinational companies face coordination problems between the leading central organizational unit and local product management. Based on the assumption that international market introductions are initiated and managed by a central unit we examine the impact of cooperation between the central unit and local product management on success. Our survey of 51 international market introductions reveals that the quality of the cooperation with local...

  16. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    CERN Document Server

    Xu, Ji; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool for studying the behavior of macromolecular systems, but many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications, and many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to MD simulation with the free software GROMACS on a single CPU core, our codes achieve about a 10 times speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the observed results agree perfectly with computations on the CPU. Therefore, our single-GPU codes already provide an inexpensive alternative for macromolecular simulations on traditional CPU clusters and they can also be used as a basis to develop parallel GPU programs to further spee...

  17. Data processing in high energy physics and vector processing computers

    International Nuclear Information System (INIS)

    The data handling done in high energy physics in order to extract the results from the large volumes of data collected in typical experiments is a very large consumer of computing capacity. More than 70 vector processing computers have now been installed and many fields of applications have been tried on such computers as the ILLIAC IV, the TI ASC, the CDC STAR-100 and more recently on the CRAY-1, the CDC Cyber 205, the ICL DAP and the CRAY X-MP. This paper attempts to analyze the reasons for the lack of use of these computers in processing results from high energy physics experiments. Little work has been done to look at the possible vectorisation of the large codes in this field, but the motivation to apply vector processing computers in high energy physics data handling may be increasing as the gap between the scalar performance and the vector performance offered by large computers available on the market widens

  18. Central Data Processing System (CDPS) users manual: solar heating and cooling program

    Energy Technology Data Exchange (ETDEWEB)

    1976-09-01

    The Central Data Processing System (CDPS) provides the software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple remote sites. The instrumentation data associated with these systems is collected, processed, and presented in a form which supports continuity of performance evaluation across all applications. The CDPS consists of three major elements: communication interface computer, central data processing computer, and performance evaluation data base. The CDPS Users Manual identifies users of the performance data base, procedures for operation, and guidelines for software maintenance. The manual also defines the output capabilities of the CDPS in support of external users of the system.

  19. Pn Tomography of the Central and Eastern United States

    Science.gov (United States)

    Zhang, Q.; Sandvol, E. A.; Liu, M.

    2005-12-01

    Approximately 44,000 Pn phase readings from the ISC and NEIC catalogs and 750 hand-picked arrivals were inverted to map the velocity structure of mantle lithosphere in the Central and Eastern United States (CEUS). Overall we have a high density of ray paths within the active seismic zones in the eastern and southern parts of the CEUS, while ray coverage is relatively poor to the west of the Great Lakes as well as along the eastern and southern coastlines of the U.S. The average Pn velocity in the CEUS is approximately 8.03 km/s. High Pn velocities (~8.18 km/s) within the northeastern part of the North American shield are reliable, while the resolution of the velocity image of the American shield around the mid-continent rift (MCR) is relatively low due to the poor ray coverage. Under the East Continent Rift (EC), the northern part of the Reelfoot Rift Zone (RRZ), and the South Oklahoma Aulacogen (SO), we also observe high velocity lithospheric mantle (~8.13-8.18 km/s). Typical Pn velocities (~7.98 km/s) are found between those three high velocity blocks. Low velocities are shown in the northern and southern Appalachians (~7.88-7.98 km/s) as well as the Rio Grande Rift (~7.88 km/s). In the portion of our model with the highest ray density, the Pn azimuthal anisotropy seems to be robust. These fast directions appear to mirror the boundaries of the low Pn velocity zone and parallel the Appalachians down to the southwest.

  20. Five Computational Actions in Information Processing

    Directory of Open Access Journals (Sweden)

    Stefan Vladutescu

    2014-12-01

    This study is circumscribed to information science. The zetetic aim of the research is twofold: (a) to define the concept of an action of computational information processing and (b) to design a taxonomy of such actions. Our thesis is that any information processing is computational processing. First, the investigation tries to demonstrate that the computational actions of information processing, or informational actions, are computational-investigative configurations for structuring information: clusters of highly aggregated operations which are carried out in a unitary manner, operate convergently, and behave like a single computational device. From a methodological point of view, they fall within the category of analytical instruments for the informational processing of raw material, of data, and of vague, confused, unstructured informational elements. In their internal articulation, the actions are patterns for the integrated carrying out of operations of informational investigation. Secondly, we propose an inventory and a description of five basic informational computational actions: exploring, grouping, anticipation, schematization, and inferential structuring. R. S. Wyer and T. K. Srull (2014) speak about "four information processing". We would like to continue with further investigation of the relationship between operations, actions, strategies and mechanisms of informational processing.

  1. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    Science.gov (United States)

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  2. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    International Nuclear Information System (INIS)

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules
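
    The control law itself is not described in these abstracts; as a generic illustration of the kind of low-latency, many-channel computation such a system performs each sample period, the CUDA sketch below forms every output channel as a weighted sum of all input channels. The names and the gain-matrix formulation are assumptions for illustration, not the HBT-EP algorithm.

      // Hypothetical sketch: one thread per output channel, applied once per
      // sample period to the latest vector of digitized inputs.
      __global__ void channel_mix(const float *in, const float *gains,
                                  float *out, int n_in, int n_out)
      {
          int o = blockIdx.x * blockDim.x + threadIdx.x;
          if (o >= n_out) return;

          float acc = 0.0f;
          for (int i = 0; i < n_in; ++i)
              acc += gains[o * n_in + i] * in[i];   // row o of the gain matrix
          out[o] = acc;
      }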

  3. Rigorous Computation of Fundamental Units in Algebraic Number Fields

    CERN Document Server

    Fontein, Felix

    2010-01-01

    We present an algorithm that unconditionally computes a representation of the unit group of a number field of discriminant $\Delta_K$, given a full-rank subgroup as input, in asymptotically fewer bit operations than the baby-step giant-step algorithm. If the input is assumed to represent the full unit group, for example, under the assumption of the Generalized Riemann Hypothesis, then our algorithm can unconditionally certify its correctness in expected time $O(\Delta_K^{n/(4n + 2) + \epsilon}) = O(\Delta_K^{1/4 - 1/(8n+4) + \epsilon})$, where $n$ is the unit rank.

  4. Fast calculation of HELAS amplitudes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use the graphics processing unit (GPU) for fast calculations of helicity amplitudes of physics processes. As our first attempt, we compute $u\overline{u}\to n\gamma$ ($n=2$ to 8) processes in $pp$ collisions at $\sqrt{s} = 14$ TeV by transferring the MadGraph-generated HELAS amplitudes (FORTRAN) into newly developed HEGET (HELAS Evaluation with GPU Enhanced Technology) codes written in CUDA, a C platform developed by NVIDIA for general-purpose computing on the GPU. Compared with the usual CPU programs, we obtain 40-150 times better performance on the GPU.

  5. Controlling Laboratory Processes From A Personal Computer

    Science.gov (United States)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without operator or run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  6. Computing collinear 4-Body Problem central configurations with given masses

    CERN Document Server

    Piña, E

    2011-01-01

    An interesting description of a collinear configuration of four particles is found in terms of two spherical coordinates. An algorithm to compute the four coordinates of the particles of a collinear four-body central configuration is presented, using an orthocentric tetrahedron whose edge lengths are functions of the given masses. Each mass is placed at the corresponding vertex of the tetrahedron. The center of mass (and orthocenter) of the tetrahedron is at the origin of coordinates. The tetrahedron is initially placed with two pairs of vertices each lying in a coordinate plane, the lines joining each pair parallel to a coordinate axis, and the centers of mass of each pair and of all four particles on one coordinate axis. From this original position the tetrahedron is rotated by two angles around the center of mass until the direction of the configuration coincides with one coordinate axis. The four coordinates of the vertices of the tetrahedron along this direction determine the central configurati...

  7. The impact of centrality on cooperative processes

    CERN Document Server

    Reia, Sandro M; Fontanari, José F

    2016-01-01

    The solution of today's complex problems requires the assembly of task forces whose members are usually connected remotely across long physical distances and different time zones, hence the importance of understanding the effects of imposed communication patterns (i.e., who can communicate with whom) on group performance. Here we use an agent-based model to explore the influence of the betweenness centrality of the nodes on the time the group requires to find the global maxima of families of NK-fitness landscapes. The agents cooperate by broadcasting messages informing their neighbors of their fitness and use this information to copy the more successful agent in their neighborhood. We find that for easy tasks (smooth landscapes) the topology of the communication network has no effect on the performance of the group and that the more central nodes are the most likely to find the global maximum first. For difficult tasks (rugged landscapes), however, we find a positive correlation between the variance of the betw...

  8. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    Science.gov (United States)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implemented these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the author's knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code. Accuracy evaluation by comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement depending on the graphics card used, the problem size, and the precision when comparing to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support the visualization of the simulation results and thus the whole MHD simulation and visualization process can be performed entirely on GPUs.

  9. Making War Work for Industry: The United Alkali Company's Central Laboratory During World War One.

    Science.gov (United States)

    Reed, Peter

    2015-02-01

    The creation of the Central Laboratory immediately after the United Alkali Company (UAC) was formed in 1890, by amalgamating the Leblanc alkali works in Britain, brought high expectations of repositioning the company by replacing its obsolete Leblanc process plant and expanding its range of chemical products. By 1914, UAC had struggled with few exceptions to adopt new technologies and processes and was still reliant on the Leblanc process. From 1914, the Government would rely heavily on its contribution to the war effort. As a major heavy-chemical manufacturer, UAC produced chemicals for explosives and warfare gases, while also trying to maintain production of many essential chemicals including fertilisers for homeland consumption. UAC's wartime effort was led by the Central Laboratory, working closely with the recently established Engineer's Department to develop new process pathways, build new plant, adapt existing plant, and produce the contracted quantities, all as quickly as possible to meet the changing battlefield demands. This article explores how wartime conditions and demands provided the stimulus for the Central Laboratory's crucial R&D work during World War One.

  10. Computer programmes for mineral processing presentation

    OpenAIRE

    Krstev, Aleksandar; Krstev, Boris; Golomeov, Blagoj; Golomeova, Mirjana

    2009-01-01

    This paper presents computer applications of the software packages Minteh-5, Minteh-6 and Cyclone, written in Visual Basic under Visual Studio, for the presentation of two products for some closed circuits of grinding-classifying processes. These methods make possible an appropriate, fast and reliable presentation of some complex circuits in mineral processing technologies.

  11. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  12. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications. Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists…

  13. 75 FR 27798 - Notice of Issuance of Final Determination Concerning Certain Commodity-Based Clustered Storage Units

    Science.gov (United States)

    2010-05-18

    ... components: A. Hardware 1. A Central Processing Unit ("CPU"), which is used to provide the computing power... by the implanting of the central processing unit on the board because, whereas in Data General use... Commodity-Based Clustered Storage Units AGENCY: U.S. Customs and Border Protection, Department of...

  14. A New Computational Schema for Euphonic Conjunctions in Sanskrit Processing

    CERN Document Server

    Rama, N

    2009-01-01

    Automated language processing is central to the drive to enable facilitated referencing of increasingly available Sanskrit E texts. The first step towards processing Sanskrit text involves the handling of Sanskrit compound words that are an integral part of Sanskrit texts. This firstly necessitates the processing of euphonic conjunctions or sandhis, which are points in words or between words, at which adjacent letters coalesce and transform. The ancient Sanskrit grammarian Panini's codification of the Sanskrit grammar is the accepted authority in the subject. His famed sutras or aphorisms, numbering approximately four thousand, tersely, precisely and comprehensively codify the rules of the grammar, including all the rules pertaining to sandhis. This work presents a fresh new approach to processing sandhis in terms of a computational schema. This new computational model is based on Panini's complex codification of the rules of grammar. The model has simple beginnings and is yet powerful, comprehensive and comp...

  15. Accelerating Radio Astronomy Cross-Correlation with Graphics Processing Units

    CERN Document Server

    Clark, M A; Greenhill, L J

    2011-01-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from "Large-N" arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi architecture, sustaining up to 79% of the peak single precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared to ASIC and FPGA implementations have the potential to greatly shorten the cycle of correlator development and deployment, for case...
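
    The X-engine step described above amounts to accumulating, for every antenna pair and every frequency channel, the product of one voltage stream with the complex conjugate of the other, summed over time. The CUDA sketch below shows that core accumulation for a single channel; it is an illustrative simplification, not the paper's tiled, pipelined implementation.

      // Hypothetical sketch: accumulate the complex cross-correlation for each
      // antenna pair (upper triangle) over n_time samples of one channel.
      __global__ void xcorr(const float2 *volt,   // volt[t * n_ant + a] = (re, im)
                            float2 *acc,          // acc[i * n_ant + j] accumulators
                            int n_ant, int n_time)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;   // antenna i
          int j = blockIdx.y * blockDim.y + threadIdx.y;   // antenna j
          if (i >= n_ant || j >= n_ant || j < i) return;

          float2 sum = make_float2(0.0f, 0.0f);
          for (int t = 0; t < n_time; ++t) {
              float2 a = volt[t * n_ant + i];
              float2 b = volt[t * n_ant + j];
              sum.x += a.x * b.x + a.y * b.y;   // Re(a * conj(b))
              sum.y += a.y * b.x - a.x * b.y;   // Im(a * conj(b))
          }
          acc[i * n_ant + j].x += sum.x;
          acc[i * n_ant + j].y += sum.y;
      }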

  16. Overview of Central Auditory Processing Deficits in Older Adults.

    Science.gov (United States)

    Atcherson, Samuel R; Nagaraj, Naveen K; Kennett, Sarah E W; Levisee, Meredith

    2015-08-01

    Although there are many reported age-related declines in the human body, the notion that a central auditory processing deficit exists in older adults has not always been clear. Hearing loss and both structural and functional central nervous system changes with advancing age are contributors to how we listen, hear, and process auditory information. Even older adults with normal or near normal hearing sensitivity may exhibit age-related central auditory processing deficits as measured behaviorally and/or electrophysiologically. The purpose of this article is to provide an overview of assessment and rehabilitative approaches for central auditory processing deficits in older adults. It is hoped that the outcome of the information presented here will help clinicians with older adult patients who do not exhibit the typical auditory processing behaviors exhibited by others at the same age and with comparable hearing sensitivity all in the absence of other health-related conditions. PMID:27516715

  17. BitTorrent Processing Unit: An Outlook on BPU Development

    Institute of Scientific and Technical Information of China (English)

    Zone; 杨原青

    2007-01-01

    In the early days of computing, arithmetic processing, graphics processing, and input/output handling were all carried out by the CPU (Central Processing Unit). As processing became more specialized, however, NVIDIA was the first to split graphics processing off on its own, proposing the concept of the GPU (Graphics Processing Unit) in 1999. Eight years on, the GPU has become the mainstay of graphics processing and is familiar to every gamer. Recently, two Taiwanese companies have put forward the concept of the BPU (BitTorrent Processing Unit). Below, let us take a look at this brand-new concept product.

  18. Loess studies in central United States: Evolution of concepts

    Science.gov (United States)

    Follmer, L.R.

    1996-01-01

    Few words in the realm of earth science have caused more debate than "loess". It is a common term that was first used as a name of a silt deposit before it was defined in a scientific sense. Because this "loose" deposit is easily distinguished from other more coherent deposits, it was recognized as a matter of practical concern and later became the object of much scientific scrutiny. Loess was first recognized along the Rhine Valley in Germany in the 1830s and was first noted in the United States in 1846 along the lower Mississippi River where it later became the center of attention. The use of the name eventually spread around the world, but its use has not been consistently applied. Over the years some interpretations and stratigraphic correlations have been validated, but others have been hotly contested on conceptual grounds and semantic issues. The concept of loess evolved into a complex issue as loess and loess-like deposits were discovered in different parts of the US. The evolution of concepts in the central US developed in four indefinite stages: the eras of (1) discovery and development of hypotheses, (2) conditional acceptance of the eolian origin of loess, (3) "bandwagon" popularity of loess research, and (4) analytical inquiry on the nature of loess. Toward the end of the first era around 1900, the popular opinion on the meaning of the term loess shifted from a lithological sense of loose silt to a lithogenetic sense of eolian silt. However, the dual use of the term fostered a lingering skepticism during the second era that ended in 1944 with an explosion of interest that lasted for more than a decade. In 1944, R.J. Russell proposed and H.N. Fisk defended a new non-eolian, property-based, concept of loess. The eolian advocates reacted with surprise and enthusiasm. Each side used constrained arguments to show their view of the problem, but did not examine the fundamental problem, which was not in the proofs of their hypothesis, but in the definition of

  19. A performance comparison of different graphics processing units running direct N-body simulations

    Science.gov (United States)

    Capuzzo-Dolcetta, R.; Spera, M.

    2013-11-01

    Hybrid computational architectures based on the joint power of Central Processing Units (CPUs) and Graphics Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a performance comparison of various GPUs available on the market when applied to the numerical integration of the classic gravitational N-body problem. To do this, we developed an OpenCL version of the parallel code HiGPUs for these tests, because this portable version is the only one able to run on GPUs of different makes. The main general result is that we confirm the reliability, speed and cheapness of GPUs when applied to the examined kind of problems (i.e. when the forces to evaluate depend on the mutual distances, as happens in gravitational physics and molecular dynamics). More specifically, we find that even the cheap GPUs built just for gaming applications are very fast in scientific applications as well and, although with some limitations concerning on-board memory, can be a good choice for building a cheap and efficient machine for scientific applications.

  20. A Performance Comparison of Different Graphics Processing Units Running Direct N-Body Simulations

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2013-01-01

    Hybrid computational architectures based on the joint power of Central Processing Units and Graphics Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a performance comparison of various GPUs available on the market when applied to the numerical integration of the classic gravitational N-body problem. To do this, we developed an OpenCL version of the parallel code (HiGPUs) to use for these tests, because this version is the only one able to run on GPUs of different makes. The main general result is that we confirm the reliability, speed and cheapness of GPUs when applied to the examined kind of problems (i.e. when the forces to evaluate depend on the mutual distances, as happens in gravitational physics and molecular dynamics). More specifically, we find that even the cheap GPUs built just for gaming applications are very fast in terms of computing speed...
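
    The force evaluation benchmarked in these two records is the classic O(N^2) direct summation, in which each body accumulates the softened gravitational acceleration from every other body. The papers use an OpenCL code (HiGPUs); the sketch below is a CUDA rendering of the same kernel structure, with the softening parameter eps2 and the data layout as assumptions.

      // Hypothetical sketch: one thread per body sums softened accelerations
      // from all other bodies (the self-term vanishes because dx = dy = dz = 0).
      __global__ void nbody_acc(const float4 *pos_mass,   // x, y, z, mass
                                float4 *acc, int n, float eps2)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;

          float4 pi = pos_mass[i];
          float ax = 0.0f, ay = 0.0f, az = 0.0f;
          for (int j = 0; j < n; ++j) {
              float4 pj = pos_mass[j];
              float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
              float r2 = dx * dx + dy * dy + dz * dz + eps2;
              float inv_r = rsqrtf(r2);
              float w = pj.w * inv_r * inv_r * inv_r;   // m_j / r^3
              ax += w * dx; ay += w * dy; az += w * dz;
          }
          acc[i] = make_float4(ax, ay, az, 0.0f);
      }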

  1. Soft Computing Techniques for Process Control Applications

    Directory of Open Access Journals (Sweden)

    Rahul Malhotra

    2011-09-01

    Technological innovations in soft computing techniques have brought automation capabilities to new levels of application. Process control, an important function in any industry for controlling complex system parameters, can greatly benefit from such advancements. Conventional control theory is based on mathematical models that describe the dynamic behaviour of process control systems. Due to a lack of comprehensibility, conventional controllers are often inferior to intelligent controllers. Soft computing techniques provide an ability to make decisions and to learn from reliable data or an expert's experience. Moreover, soft computing techniques can cope with a variety of environmental and stability-related uncertainties. This paper explores different areas of soft computing techniques, viz. fuzzy logic, genetic algorithms, and the hybridization of the two, and summarizes the results of several process control case studies. It is inferred from the results that soft computing controllers provide better control of errors than conventional controllers. Further, hybrid fuzzy-genetic-algorithm controllers have optimized the errors more successfully than standalone soft computing and conventional techniques.

  2. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  3. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  4. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
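
    The parallelization described above assigns each equally-spaced node point to one GPU thread, which sums the kernel contributions of all particles. A minimal CUDA sketch of that assignment is given below; the Gaussian product kernel, the bandwidths hx and hy, and the array names are assumptions for illustration, not the paper's code.

      // Hypothetical sketch: one thread per grid node, summing a Gaussian
      // product kernel over all particle positions (px, py).
      __global__ void kde2d(const float *px, const float *py, int n_particles,
                            const float *gx, const float *gy, float *density,
                            int n_nodes, float hx, float hy)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;   // node index
          if (i >= n_nodes) return;

          const float norm = 1.0f / (2.0f * 3.14159265f * hx * hy * n_particles);
          float sum = 0.0f;
          for (int p = 0; p < n_particles; ++p) {
              float dx = (gx[i] - px[p]) / hx;
              float dy = (gy[i] - py[p]) / hy;
              sum += expf(-0.5f * (dx * dx + dy * dy));
          }
          density[i] = norm * sum;
      }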

  5. Beyond Word Processing: Rhetorical Invention with Computers.

    Science.gov (United States)

    Strickland, James

    In the area of composition, computer assisted instruction (CAI) must move beyond the limited concerns of the current-traditional rhetoric to address the larger issues of writing, become process-centered, and involve active writing rather than answering multiple-choice questions. Researchers cite four major types of interactive CAI, the last of…

  6. Computation of confidence intervals for Poisson processes

    Science.gov (United States)

    Aguilar-Saavedra, J. A.

    2000-07-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.

  7. Computation of confidence intervals for Poisson processes

    CERN Document Server

    Aguilar-Saavedra, J A

    2000-01-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.
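
    For reference, the Feldman-Cousins construction that both records refer to orders the possible outcomes n of a Poisson process with signal mean mu and known background b by the likelihood ratio (a textbook statement of the method, not an excerpt from the paper):

      R(n;\mu) \;=\; \frac{P(n \mid \mu + b)}{P(n \mid \hat{\mu} + b)},
      \qquad P(n \mid \lambda) = \frac{e^{-\lambda}\lambda^{n}}{n!},
      \qquad \hat{\mu} = \max(0,\, n - b),

    and values of n are added to the acceptance interval in decreasing order of R(n; mu) until the desired coverage (e.g. 90%) is reached; the confidence interval for mu is then obtained by inverting these acceptance intervals.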

  8. Parallelizing the Cellular Potts Model on graphics processing units

    Science.gov (United States)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256^3 on a single graphics card. Benchmarks show that our implementation runs ~80× faster than serial implementations, and ~5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
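
    A heavily simplified CUDA sketch of the checkerboard idea is given below: sites of one parity are updated while their neighbours (of the other parity) are only read, so concurrent copy attempts cannot conflict. Only the cell-cell adhesion term of the Hamiltonian is shown, with unit coupling; the volume and surface-area constraints, the finer checkerboard levels and the atomic locks of the paper are omitted, and all names are assumptions.

      // Hypothetical sketch: one Metropolis copy attempt per lattice site on one
      // checkerboard sub-lattice; rnd holds pre-generated uniforms in [0,1).
      __global__ void cpm_checkerboard_step(int *spin, int nx, int ny, int parity,
                                            float temperature, const float *rnd)
      {
          int x = blockIdx.x * blockDim.x + threadIdx.x;
          int y = blockIdx.y * blockDim.y + threadIdx.y;
          if (x >= nx || y >= ny || ((x + y) & 1) != parity) return;

          int idx = y * nx + x;
          int nbr[4] = { y * nx + (x + 1) % nx,     y * nx + (x + nx - 1) % nx,
                         ((y + 1) % ny) * nx + x,   ((y + ny - 1) % ny) * nx + x };
          int src = nbr[0];                     // copy source: right neighbour
          if (spin[src] == spin[idx]) return;   // nothing to copy

          // adhesion energy change (J = 1) if spin[src] is copied into idx
          float dE = 0.0f;
          for (int k = 0; k < 4; ++k)
              dE += (spin[nbr[k]] != spin[src]) - (spin[nbr[k]] != spin[idx]);

          if (dE <= 0.0f || rnd[idx] < expf(-dE / temperature))
              spin[idx] = spin[src];            // Metropolis acceptance
      }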

  9. CSI computer system/remote interface unit acceptance test results

    Science.gov (United States)

    Sparks, Dean W., Jr.

    1992-01-01

    The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) is discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC) programmed, space flight qualified computer and a flight data acquisition and filtering computer, developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open loop excitation, closed loop control, safing, RIU digital filtering, and RIU stand alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground based systems in performing real-time control-structure experiments.

  10. Computer Supported Collaborative Processes in Virtual Organizations

    CERN Document Server

    Paszkiewicz, Zbigniew

    2012-01-01

    In global economy, turbulent organization environment strongly influences organization's operation. Organizations must constantly adapt to changing circumstances and search for new possibilities of gaining competitive advantage. To face this challenge, small organizations base their operation on collaboration within Virtual Organizations (VOs). VO operation is based on collaborative processes. Due to dynamism and required flexibility of collaborative processes, existing business information systems are insufficient to efficiently support them. In this paper a novel method for supporting collaborative processes based on process mining techniques is proposed. The method allows activity patterns in various instances of collaborative processes to be identified and used for recommendation of activities. This provides an opportunity for better computer support of collaborative processes leading to more efficient and effective realization of business goals.

  11. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title 'Function Follows Performance in Evolutionary Computational Processing' suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied architectural projects. At the core lies the formulation of a methodology that is based upon the idea of human and computational selection in accordance with pre-defined performance criteria that can be adapted to different requirements by the mere change of parameter input in order to reach location-specific...

  12. Evaluating Computer Technology Integration in a Centralized School System

    Science.gov (United States)

    Eteokleous, N.

    2008-01-01

    The study evaluated the current situation in Cyprus elementary classrooms regarding computer technology integration in an attempt to identify ways of expanding teachers' and students' experiences with computer technology. It examined how Cypriot elementary teachers use computers, and the factors that influence computer integration in their…

  13. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog; Jørgensen, John Bagterp; Dammann, Bernd

    2012-01-01

    The high floating-point performance and memory bandwidth of Graphical Processing Units (GPUs) make them ideal for the large number of computations that often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires reevaluating existing algorithms with respect to this new architecture. This is of particular interest for large-scale constrained optimization problems with real-time requirements. The aim of this study is to investigate different methods for solving large-scale optimization problems with a focus on their applicability to GPUs. We examine published techniques for iterative methods in interior point methods (IPMs) by applying them to simple test cases, such as a system of masses connected by springs. Iterative methods allow us to deal with the ill-conditioning occurring in the later iterations of the IPM as well...

  14. Adaptive image processing a computational intelligence perspective

    CERN Document Server

    Guan, Ling; Wong, Hau San

    2002-01-01

    Adaptive image processing is one of the most important techniques in visual information processing, especially in early vision such as image restoration, filtering, enhancement, and segmentation. While existing books present some important aspects of the issue, there is not a single book that treats this problem from a viewpoint that is directly linked to human perception - until now. This reference treats adaptive image processing from a computational intelligence viewpoint, systematically and successfully, from theory to applications, using the synergies of neural networks, fuzzy logic, and

  15. Accelerating sparse linear algebra using graphics processing units

    Science.gov (United States)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU with excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations and would seem a natural fit for the processing power of the GPU. Our work is on a GPU accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph theory portion of the direct solvers while the GPU simultaneously performs the low level linear algebra routines.
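
    The sparse kernels at the heart of such solvers are dominated by sparse matrix-vector products. As a minimal illustration of how this maps onto the GPU (a scalar CSR kernel for illustration, not the authors' optimized library code), one thread can be assigned per matrix row:

      // Hypothetical sketch: y = A * x for a matrix stored in CSR format,
      // one thread per row.
      __global__ void spmv_csr(int n_rows, const int *row_ptr, const int *col_idx,
                               const float *vals, const float *x, float *y)
      {
          int row = blockIdx.x * blockDim.x + threadIdx.x;
          if (row >= n_rows) return;

          float dot = 0.0f;
          for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
              dot += vals[j] * x[col_idx[j]];
          y[row] = dot;
      }

    In practice, assigning a warp per row (a vector CSR kernel) usually coalesces memory accesses better for rows with many nonzeros.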

  16. Line-by-line spectroscopic simulations on graphics processing units

    Science.gov (United States)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of processor resources available and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations on fluid dynamics alone. Program summary - Program title: GPU4RE Catalogue identifier: ADZY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 62 776 No. of bytes in distributed program, including test data, etc.: 1 513 247 Distribution format: tar.gz Programming language: C++ Computer: x86 PC Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C
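
    The inner loop of a line-by-line calculation evaluates, at every point of a spectral grid, the sum of the profiles of all contributing lines; that is the part that parallelizes so well on a GPU. The CUDA sketch below uses a simple Lorentzian profile for illustration (GPU4RE itself is not reproduced here, and the line-list layout, units and choice of profile are assumptions).

      // Hypothetical sketch: each thread evaluates the absorption coefficient at
      // one spectral grid point by summing Lorentzian line profiles over all lines.
      __global__ void lbl_absorption(const float *nu_grid, int n_grid,
                                     const float *line_nu, const float *line_S,
                                     const float *line_gamma, int n_lines,
                                     float *kappa)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n_grid) return;

          float nu = nu_grid[i];
          float sum = 0.0f;
          for (int l = 0; l < n_lines; ++l) {
              float d = nu - line_nu[l];
              float g = line_gamma[l];
              // Lorentzian: S * (gamma / pi) / ((nu - nu0)^2 + gamma^2)
              sum += line_S[l] * (g / 3.14159265f) / (d * d + g * g);
          }
          kappa[i] = sum;
      }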

  17. Understanding the Functional Central Limit Theorems with Some Applications to Unit Root Testing with Structural Change

    Directory of Open Access Journals (Sweden)

    Juan Carlos Aquino

    2013-06-01

    The application of different unit root statistics is by now a standard practice in empirical work. Even though it is a practical issue, these statistics have complex nonstandard distributions depending on functionals of certain stochastic processes, and their derivations represent a barrier even for many theoretical econometricians. These derivations are based on rigorous and fundamental statistical tools which are not (very) well known by standard econometricians. This paper aims to fill this gap by explaining in a simple way one of these fundamental tools: namely, the Functional Central Limit Theorem. To this end, this paper analyzes the foundations and applicability of two versions of the Functional Central Limit Theorem within the framework of a unit root with a structural break. Initial attention is focused on the probabilistic structure of the time series to be considered. Thereafter, attention is focused on the asymptotic theory for nonstationary time series proposed by Phillips (1987a), which is applied by Perron (1989) to study the effects of an (assumed) exogenous structural break on the power of the augmented Dickey-Fuller test and by Zivot and Andrews (1992) to criticize the exogeneity assumption and propose a method for estimating an endogenous breakpoint. A systematic method for dealing with efficiency issues is introduced by Perron and Rodriguez (2003), which extends the Generalized Least Squares detrending approach due to Elliot et al. (1996). An empirical application is provided.
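
    For orientation, the tool discussed above is usually stated in the following textbook form (a standard statement, not a quotation from the article): for zero-mean innovations u_t with long-run variance sigma^2 satisfying suitable moment and weak-dependence conditions, the scaled partial-sum process

      X_T(r) \;=\; \frac{1}{\sigma\sqrt{T}} \sum_{t=1}^{\lfloor Tr \rfloor} u_t,
      \qquad r \in [0,1],

    converges weakly to a standard Wiener process W(r) as T grows. The nonstandard distributions of unit root statistics are then obtained as functionals of W via the continuous mapping theorem.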

  18. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for Atlas, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in the ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  19. Managing internode data communications for an uninitialized process in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes, MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  20. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    Science.gov (United States)

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  1. Implicit Theories of Creativity in Computer Science in the United States and China

    Science.gov (United States)

    Tang, Chaoying; Baer, John; Kaufman, James C.

    2015-01-01

    To study implicit concepts of creativity in computer science in the United States and mainland China, we first asked 308 Chinese computer scientists for adjectives that would describe a creative computer scientist. Computer scientists and non-computer scientists from China (N = 1069) and the United States (N = 971) then rated how well those…

  2. Chemical computing with reaction-diffusion processes.

    Science.gov (United States)

    Gorecki, J; Gizynski, K; Guzowski, J; Gorecka, J N; Garstecki, P; Gruenert, G; Dittrich, P

    2015-07-28

    Chemical reactions are responsible for information processing in living organisms. It is believed that the basic features of biological computing activity are reflected by a reaction-diffusion medium. We illustrate the ideas of chemical information processing considering the Belousov-Zhabotinsky (BZ) reaction and its photosensitive variant. The computational universality of information processing is demonstrated. For different methods of information coding constructions of the simplest signal processing devices are described. The function performed by a particular device is determined by the geometrical structure of oscillatory (or of excitable) and non-excitable regions of the medium. In a living organism, the brain is created as a self-grown structure of interacting nonlinear elements and reaches its functionality as the result of learning. We discuss whether such a strategy can be adopted for generation of chemical information processing devices. Recent studies have shown that lipid-covered droplets containing solution of reagents of BZ reaction can be transported by a flowing oil. Therefore, structures of droplets can be spontaneously formed at specific non-equilibrium conditions, for example forced by flows in a microfluidic reactor. We describe how to introduce information to a droplet structure, track the information flow inside it and optimize medium evolution to achieve the maximum reliability. Applications of droplet structures for classification tasks are discussed. PMID:26078345

  3. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity
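
    As a loose, hedged illustration of one early part of such a processing chain (and not the Dau et al. or Jepsen et al. implementation), the Python sketch below applies a crude half-wave rectification as a hair-cell-like stage followed by the 150-Hz lowpass modulation filter mentioned in the abstract; the test signal, sampling rate, and filter order are assumptions made for the example.

      import numpy as np
      from scipy.signal import butter, lfilter

      fs = 16000                                 # assumed sampling rate (Hz)
      t = np.arange(0, 0.5, 1 / fs)
      # Amplitude-modulated 1-kHz tone as a simple test signal
      tone = np.sin(2 * np.pi * 1000 * t) * (1 + 0.5 * np.sin(2 * np.pi * 20 * t))

      # Half-wave rectification as a crude stand-in for hair-cell transduction
      rectified = np.maximum(tone, 0.0)

      # 150-Hz first-order lowpass modulation filter (cutoff taken from the abstract)
      b, a = butter(1, 150 / (fs / 2), btype="low")
      envelope = lfilter(b, a, rectified)

      print(envelope[:5])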

  4. Hydrologic Terrain Processing Using Parallel Computing

    Science.gov (United States)

    Tarboton, D. G.; Watson, D. W.; Wallace, R. M.; Schreuders, K.; Tesfa, T. K.

    2009-12-01

    Topography in the form of Digital Elevation Models (DEMs), is widely used to derive information for the modeling of hydrologic processes. Hydrologic terrain analysis augments the information content of digital elevation data by removing spurious pits, deriving a structured flow field, and calculating surfaces of hydrologic information derived from the flow field. The increasing availability of high-resolution terrain datasets for large areas poses a challenge for existing algorithms that process terrain data to extract this hydrologic information. This paper will describe parallel algorithms that have been developed to enhance hydrologic terrain pre-processing so that larger datasets can be more efficiently computed. Message Passing Interface (MPI) parallel implementations have been developed for pit removal, flow direction, and generalized flow accumulation methods within the Terrain Analysis Using Digital Elevation Models (TauDEM) package. The parallel algorithm works by decomposing the domain into striped or tiled data partitions where each tile is processed by a separate processor. This method also reduces the memory requirements of each processor so that larger size grids can be processed. The parallel pit removal algorithm is adapted from the method of Planchon and Darboux that starts from a high elevation then progressively scans the grid, lowering each grid cell to the maximum of the original elevation or the lowest neighbor. The MPI implementation reconciles elevations along process domain edges after each scan. Generalized flow accumulation extends flow accumulation approaches commonly available in GIS through the integration of multiple inputs and a broad class of algebraic rules into the calculation of flow related quantities. It is based on establishing a flow field through DEM grid cells, that is then used to evaluate any mathematical function that incorporates dependence on values of the quantity being evaluated at upslope (or downslope) grid cells
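
    The pit-removal step described above can be illustrated with a small serial sketch (no MPI domain decomposition or edge reconciliation, which are the paper's actual contribution); the tiny DEM below and the treatment of boundary cells as free-draining outlets are assumptions made purely for the example.

      import numpy as np

      def fill_pits(dem, big=1e9):
          """Planchon-Darboux-style pit filling: start high, then repeatedly lower each
          cell to the maximum of its original elevation and its lowest neighbour."""
          nrow, ncol = dem.shape
          water = np.full_like(dem, big, dtype=float)
          water[0, :] = dem[0, :]              # boundary cells drain freely
          water[-1, :] = dem[-1, :]
          water[:, 0] = dem[:, 0]
          water[:, -1] = dem[:, -1]

          changed = True
          while changed:                       # repeated scans until no cell can be lowered
              changed = False
              for i in range(1, nrow - 1):
                  for j in range(1, ncol - 1):
                      lowest_nbr = min(water[i-1, j], water[i+1, j],
                                       water[i, j-1], water[i, j+1])
                      target = max(dem[i, j], lowest_nbr)
                      if target < water[i, j]:
                          water[i, j] = target
                          changed = True
          return water

      dem = np.array([[5, 5, 5, 5],
                      [5, 1, 2, 5],
                      [5, 2, 1, 5],
                      [5, 5, 5, 5]], dtype=float)
      # Without an epsilon increment the interior pit fills to a flat spill level (5).
      print(fill_pits(dem))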

  5. FamSeq: a variant calling program for family-based sequencing data using graphics processing units.

    Directory of Open Access Journals (Sweden)

    Gang Peng

    2014-10-01

    Full Text Available Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.
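
    To give a flavour of why pedigree information helps, here is a toy, hedged Python sketch for a single biallelic site in a mother-father-child trio: a Mendelian transmission prior for the child's genotype is combined with a simple binomial read-count likelihood. This is a conceptual illustration only, not FamSeq's Bayesian network, Elston-Stewart, or MCMC machinery, and all numbers are invented.

      from math import comb

      GENOTYPES = (0, 1, 2)          # copies of the alternate allele: 0/0, 0/1, 1/1

      def transmit(g):
          """P(transmitted allele = alt | parent genotype g)."""
          return {0: 0.0, 1: 0.5, 2: 1.0}[g]

      def mendelian_prior(g_child, g_mom, g_dad):
          """P(child genotype | parents) under Mendelian inheritance."""
          p_m, p_d = transmit(g_mom), transmit(g_dad)
          probs = {
              0: (1 - p_m) * (1 - p_d),
              1: p_m * (1 - p_d) + (1 - p_m) * p_d,
              2: p_m * p_d,
          }
          return probs[g_child]

      def read_likelihood(g, alt_reads, depth, err=0.01):
          """Binomial likelihood of observing alt_reads alternate bases out of depth reads."""
          p_alt = {0: err, 1: 0.5, 2: 1 - err}[g]
          return comb(depth, alt_reads) * p_alt**alt_reads * (1 - p_alt)**(depth - alt_reads)

      def child_posterior(alt_reads, depth, g_mom, g_dad):
          # Assumes at least one genotype has nonzero prior and nonzero likelihood.
          post = {g: mendelian_prior(g, g_mom, g_dad) * read_likelihood(g, alt_reads, depth)
                  for g in GENOTYPES}
          z = sum(post.values())
          return {g: p / z for g, p in post.items()}

      # Heterozygous mother, homozygous-reference father, child with 3/10 alternate reads:
      print(child_posterior(alt_reads=3, depth=10, g_mom=1, g_dad=0))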

  6. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  7. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    Science.gov (United States)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
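
    For orientation, the underlying computation (joint rates from Cartesian rates via the inverse Jacobian) can be written in a few lines of serial NumPy; this sketch uses a textbook planar two-link arm rather than the PUMA arm and says nothing about the paper's systolic pipeline/parallel architecture.

      import numpy as np

      def jacobian_2link(theta1, theta2, l1=1.0, l2=0.8):
          """Analytic Jacobian of a planar 2-link arm (illustrative stand-in)."""
          s1, c1 = np.sin(theta1), np.cos(theta1)
          s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
          return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                           [ l1 * c1 + l2 * c12,  l2 * c12]])

      theta = np.array([0.3, 0.7])
      xdot = np.array([0.1, -0.05])     # desired end-effector velocity
      J = jacobian_2link(*theta)
      qdot = np.linalg.solve(J, xdot)   # inverse differential kinematics: solve J(q) qdot = xdot
      print(qdot)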

  8. FOCUS: Fault Observatory for the Central United States

    Science.gov (United States)

    Wolf, L. W.; Langston, C. A.; Powell, C. A.; Cramer, C.; Johnston, A.; Hill, A.

    2007-12-01

    The mid-continent has a long, complex history of crustal modification and tectonism. Precambrian basement rocks record intense deformation from rifting and convergence that precedes accumulation of a thick sequence of Phanerozoic and recent sediments that constitute the present-day Mississippi Embayment. Despite its location far from the active North American plate margins, the New Madrid seismic zone of central U.S. exhibits a diffuse yet persistent pattern of seismicity, indicating that the region continues to be tectonically active. What causes this intraplate seismicity? How does the intraplate lithosphere support local, regional and plate-wide forces that maintain earthquake productivity in this supposedly stable tectonic setting? These long-standing scientific questions are the motivation behind the proposed establishment of a borehole geo-observatory in the New Madrid seismic zone. FOCUS (Fault Observatory for the Central U.S.) would allow an unprecedented look into the deep sediments and underlying rocks of the Embayment. The proposed drill hole would fill a critical need for better information on the geophysical, mechanical, petrological, and hydrological properties of the brittle crust and overlying sediments that would help to refine models of earthquake generation, wave propagation, and seismic hazard. Measurements of strains and strain transients, episodic tremor, seismic wave velocities, wave attenuation and amplification, heat flow, non-linear sediment response, fluid pressures, crustal permeabilities, fluid chemistry, and rock strength are just some of the target data sets needed. The ultimate goal of FOCUS is to drill a 5-6 km deep scientific hole into the Precambrian basement and into the New Madrid seismic zone. The scientific goal of FOCUS is a better understanding of why earthquakes occur in intraplate settings and a better definition of seismic hazard to benefit the public safety. Short-term objectives include the preparation of an

  9. Magma chamber processes in central volcanic systems of Iceland

    DEFF Research Database (Denmark)

    Þórarinsson, Sigurjón Böðvar; Tegner, Christian

    2009-01-01

    New field work and petrological investigations of the largest gabbro outcrop in Iceland, the Hvalnesfjall gabbro of the 6-7 Ma Austurhorn intrusive complex, have established a stratigraphic sequence exceeding 800 m composed of at least 8 macrorhythmic units. The bases of the macrorhythmic units ... olivine basalts from Iceland that had undergone about 20% crystallisation of olivine, plagioclase and clinopyroxene and that the macrorhythmic units formed from thin magma layers not exceeding 200-300 m. Such a "mushy" magma chamber is akin to volcanic plumbing systems in settings of high magma supply rate, including the mid-ocean ridges and present-day magma chambers over the Iceland mantle plume. The Austurhorn central volcano likely formed in an off-rift flank zone proximal to the Iceland mantle plume during a major rift relocation.

  10. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    OpenAIRE

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. In F. W. B. Li, J. Zhao, T. K. Shih, R. W. H. Lau, Q. Li & D. McLeod (Eds.), Advances in Web Based Learning - Proceedings of the 7th International Conference on Web-based Learning (ICWL 2008) (pp. 132-144). August, 20-22, 2008, Jinhua, China: Lecture Notes in Computer Science 5145 Springer 2008, ISBN 978-3-540-85032-8.

  11. 2008 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-03-01

    This report presents the 2008 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of the CNTA was transferred from the DOE Office of Environmental Management (DOE-EM) to DOE-LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 2005) entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site during fiscal year 2008. This is the second groundwater monitoring report prepared by DOE-LM for the CNTA.

  12. Sanitary Engineering Unit Operations and Unit Processes Laboratory Manual.

    Science.gov (United States)

    American Association of Professors in Sanitary Engineering.

    This manual contains a compilation of experiments in Physical Operations, Biological and Chemical Processes for various education and equipment levels. The experiments are designed to be flexible so that they can be adapted to fit the needs of a particular program. The main emphasis is on hands-on student experiences to promote understanding.…

  13. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tn, Terry; Toups, Larry; Howe, A. Scott; Smitherman, David

    2011-01-01

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU previously served as a test bed for testing technologies and sub-systems in a terrestrial surface environment in 2010, in the Pressurized Excursion Module (PEM) configuration. Due to the amount of work involved to make the HDU project successful, the HDU project has required a team to integrate a variety of contributions from NASA centers and outside collaborators. The size of the team and the number of systems involved with the HDU make integration a complicated process. However, because the HDU shell manufacturing is complete, the team has a head start on FY-11 integration activities and can focus on integrating upgrades to existing systems as well as integrating new additions. To complete the development of the FY-11 HDU from conception to rollout for operations in July 2011, a cohesive integration strategy has been developed to integrate the various systems of HDU and the payloads. The highlighted HDU work for FY-11 will focus on performing upgrades to the PEM configuration, adding the X-Hab as a second level, adding a new porch providing the astronauts a larger work area outside the HDU for EVA preparations, and adding a Hygiene module. Together these upgrades result in a prototype configuration of the Deep Space Habitat (DSH), an element under evaluation by NASA's Human Exploration Framework Team (HEFT). Scheduled activities include early fit-checks and the utilization of a Habitat avionics test bed prior to installation into HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing, sub-system interface length estimation and human factors analysis. Decision processes on integration and use of all new subsystems will be defined early in the project to

  14. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities Project

    Energy Technology Data Exchange (ETDEWEB)

    Kevin J. Coppersmith; Lawrence A. Salomone; Chris W. Fuller; Laura L. Glaser; Kathryn L. Hanson; Ross D. Hartleb; William R. Lettis; Scott C. Lindvall; Stephen M. McDuffie; Robin K. McGuire; Gerry L. Stirewalt; Gabriel R. Toro; Robert R. Youngs; David L. Slayter; Serkan B. Bozkurt; Randolph J. Cumbest; Valentina Montaldo Falero; Roseanne C. Perman; Allison M. Shumway; Frank H. Syms; Martitia (Tish) P. Tuttle

    2012-01-31

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986) and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model, (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives The regional CEUS SSC model will be of value to readers who are involved in PSHA work, and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  15. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  16. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  17. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification of positive, negative, or neutral {P,N,Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and −1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries, which represents a threefold reduction in the computing time for the UnifiedMetrics procedure.
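
    The correlation step at the heart of the unification can be illustrated with a small hedged Python sketch: Pearson's coefficient computed over the entries that two source lexicons share. The toy lexicons and their scores below are invented for illustration and are not part of the USL.

      import numpy as np

      lexicon_a = {"good": 0.8, "bad": -0.7, "fine": 0.3, "awful": -0.9}
      lexicon_b = {"good": 0.6, "bad": -0.5, "fine": 0.1, "awful": -0.8}

      shared = sorted(set(lexicon_a) & set(lexicon_b))
      x = np.array([lexicon_a[w] for w in shared])
      y = np.array([lexicon_b[w] for w in shared])

      r = np.corrcoef(x, y)[0, 1]   # Pearson correlation: 1 = perfectly correlated,
      print(shared, round(r, 3))    # 0 = uncorrelated, -1 = perfectly inversely correlated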

  18. Insulating process for HT-7U central solenoid model coils

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The HT-7U superconducting Tokamak is a fully superconducting magnetically confined fusion device. The insulating system of its central solenoid coils is critical to its properties. In this paper the forming of the insulating system and the vacuum-pressure-impregnation (VPI) process are introduced, and the whole insulating process is verified under superconducting experiment conditions.

  19. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  20. Codifications of anaesthetic information for computer processing.

    Science.gov (United States)

    Harrison, M J; Johnson, F

    1981-07-01

    In order for any decision-making process to be computer-assisted it is necessary for the information to be encodable in some way so that the computer can manipulate the data using logical operations. In this paper the information used to generate an anaesthetic regimen is examined. A method is presented for obtaining a suitable set of statements to describe the patient's history and surgical requirements. These statements are then sorted by an algorithm which uses standard Boolean operators to produce a protocol for six phases of the anaesthetic procedure. An example is given of the system in operation. The system incorporates knowledge at the level of a consultant anaesthetist. The program used 428 statements to encode patient data, and drew upon a list of 163 possible prescriptions. The program ran on an LSI-11/2 computer using one disc drive. The scheme has direct application in the training of junior anaesthetists, as well as producing guidelines for application in other areas of medicine where the possibility of a similar codification may exist. PMID:7306370

  1. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  2. Data processing device for computed tomography system

    International Nuclear Information System (INIS)

    A data processing device applied to a computed tomography system which examines a living body using X-ray radiation is disclosed. The X-rays which have penetrated the living body are converted into electric signals in a detecting section. The electric signals are acquired and converted from analog to digital form in a data acquisition section, and then supplied to a matrix data-generating section included in the data processing device. This matrix data-generating section generates matrix data which correspond to a plurality of projection data. The matrix data are supplied to a partial sum-producing section, where partial sums corresponding to groups of the matrix data are calculated and then supplied to an accumulation section. In the accumulation section, the final value corresponding to the total sum of the matrix data is calculated, whereby the calculation for image reconstruction is performed.
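
    A minimal software analogue of that partial-sum pipeline (assumed for illustration, not taken from the patent text) might look like the following: matrix data are split into groups, each group is reduced to a partial sum, and an accumulation stage combines the partial sums into the final value.

      import numpy as np

      rng = np.random.default_rng(0)
      matrix_data = rng.random((8, 16))        # stand-in for projection-derived matrix data

      groups = np.array_split(matrix_data, 4)  # partial-sum-producing section: 4 groups
      partial_sums = [g.sum() for g in groups]

      final_value = sum(partial_sums)          # accumulation section: total sum
      assert np.isclose(final_value, matrix_data.sum())
      print(partial_sums, final_value)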

  3. Technical evaluation of proposed Ukrainian Central Radioactive Waste Processing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gates, R.; Glukhov, A.; Markowski, F.

    1996-06-01

    This technical report is a comprehensive evaluation of the proposal by the Ukrainian State Committee on Nuclear Power Utilization to create a central facility for radioactive waste (not spent fuel) processing. The central facility is intended to process liquid and solid radioactive wastes generated from all of the Ukrainian nuclear power plants and the waste generated as a result of Chernobyl 1, 2 and 3 decommissioning efforts. In addition, this report provides general information on the quantity and total activity of radioactive waste in the 30-km Zone and the Sarcophagus from the Chernobyl accident. Processing options are described that may ultimately be used in the long-term disposal of selected 30-km Zone and Sarcophagus wastes. A detailed report on the issues concerning the construction of a Ukrainian Central Radioactive Waste Processing Facility (CRWPF) from the Ukrainian Scientific Research and Design institute for Industrial Technology was obtained and incorporated into this report. This report outlines various processing options, their associated costs and construction schedules, which can be applied to solving the operating and decommissioning radioactive waste management problems in Ukraine. The costs and schedules are best estimates based upon the most current US industry practice and vendor information. This report focuses primarily on the handling and processing of what is defined in the US as low-level radioactive wastes.

  4. Technical evaluation of proposed Ukrainian Central Radioactive Waste Processing Facility

    International Nuclear Information System (INIS)

    This technical report is a comprehensive evaluation of the proposal by the Ukrainian State Committee on Nuclear Power Utilization to create a central facility for radioactive waste (not spent fuel) processing. The central facility is intended to process liquid and solid radioactive wastes generated from all of the Ukrainian nuclear power plants and the waste generated as a result of Chernobyl 1, 2 and 3 decommissioning efforts. In addition, this report provides general information on the quantity and total activity of radioactive waste in the 30-km Zone and the Sarcophagus from the Chernobyl accident. Processing options are described that may ultimately be used in the long-term disposal of selected 30-km Zone and Sarcophagus wastes. A detailed report on the issues concerning the construction of a Ukrainian Central Radioactive Waste Processing Facility (CRWPF) from the Ukrainian Scientific Research and Design institute for Industrial Technology was obtained and incorporated into this report. This report outlines various processing options, their associated costs and construction schedules, which can be applied to solving the operating and decommissioning radioactive waste management problems in Ukraine. The costs and schedules are best estimates based upon the most current US industry practice and vendor information. This report focuses primarily on the handling and processing of what is defined in the US as low-level radioactive wastes

  5. Seismic risk assessment and application in the central United States

    Science.gov (United States)

    Wang, Z.

    2011-01-01

    Seismic risk is a somewhat subjective, but important, concept in earthquake engineering and other related decision-making. Another important concept that is closely related to seismic risk is seismic hazard. Although seismic hazard and seismic risk have often been used interchangeably, they are fundamentally different: seismic hazard describes the natural phenomenon or physical property of an earthquake, whereas seismic risk describes the probability of loss or damage that could be caused by a seismic hazard. The distinction between seismic hazard and seismic risk is of practical significance because measures for seismic hazard mitigation may differ from those for seismic risk reduction. Seismic risk assessment is a complicated process and starts with seismic hazard assessment. Although probabilistic seismic hazard analysis (PSHA) is the most widely used method for seismic hazard assessment, recent studies have found that PSHA is not scientifically valid. Use of PSHA will lead to (1) artifact estimates of seismic risk, (2) misleading use of the annual probability of exceedance (i.e., the probability of exceedance in one year) as a frequency (per year), and (3) numerical creation of extremely high ground motion. An alternative approach, which is similar to those used for flood and wind hazard assessments, has been proposed. © 2011 ASCE.

  6. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    Science.gov (United States)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during the decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The resulting numerical model accuracy was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the predictions of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for decreasing significantly the simulation time [3, 4]. The numerical scheme implemented in GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the Graphical Hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources. 71 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013) 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics. 225 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software. 78 1-15. [Lacasta

  7. Quantitative computer representation of propellant processing

    Science.gov (United States)

    Hicks, M. D.; Nikravesh, P. E.

    1990-01-01

    With the technology currently available for the manufacture of propellants, it is possible to control the variance of the total specific impulse obtained from the rocket boosters to within approximately five percent. Though at first inspection this may appear to be a reasonable amount of control, when it is considered that any uncertainty in the total kinetic energy delivered to the spacecraft translates into a design with less total usable payload, even this degree of uncertainty becomes unacceptable. There is strong motivation to control the variance in the specific impulse of the shuttle's solid boosters. Any small gains in the predictability and reliability of the booster would lead to a very substantial payoff in earth-to-orbit payload. The purpose of this study is to examine one aspect of the manufacture of solid propellants, namely, the mixing process. The traditional approach of computational fluid mechanics is notoriously complex and time consuming. Certain simplifications are made, yet certain fundamental aspects of the mixing process are investigated as a whole. It is possible to consider a mixing process in a mathematical sense as an operator, F, which maps a domain back upon itself. An operator which demonstrates good mixing should be able to spread any subset of the domain completely and evenly throughout the whole domain by successive applications of the mixing operator, F. Two- and three-dimensional models are developed, and graphical visualizations of two- and three-dimensional mixing processes are presented.
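
    The "mixing operator" idea can be illustrated with a hedged Python sketch that uses the classic baker's map on the unit square as a stand-in for F (the study's own operators are not specified here): repeated application of F spreads an initially confined set of points across the whole domain.

      import numpy as np

      def baker_map(points):
          """One application of the baker's map F on the unit square [0,1] x [0,1]."""
          x, y = points[:, 0], points[:, 1]
          left = x < 0.5
          new_x = np.where(left, 2 * x, 2 * x - 1)
          new_y = np.where(left, y / 2, y / 2 + 0.5)
          return np.column_stack([new_x, new_y])

      rng = np.random.default_rng(1)
      points = rng.uniform(0.0, 0.1, size=(2000, 2))   # points initially confined to a corner

      for _ in range(8):                               # successive applications of F
          points = baker_map(points)

      # Crude mixing measure: fraction of a 10x10 grid of cells now occupied by points
      occupied = len({(int(px * 10), int(py * 10)) for px, py in np.clip(points, 0, 0.999)})
      print(f"occupied cells after mixing: {occupied}/100")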

  8. Kinematics of the New Madrid seismic zone, central United States, based on stepover models

    Science.gov (United States)

    Pratt, Thomas L.

    2012-01-01

    Seismicity in the New Madrid seismic zone (NMSZ) of the central United States is generally attributed to a stepover structure in which the Reelfoot thrust fault transfers slip between parallel strike-slip faults. However, some arms of the seismic zone do not fit this simple model. Comparison of the NMSZ with an analog sandbox model of a restraining stepover structure explains all of the arms of seismicity as only part of the extensive pattern of faults that characterizes stepover structures. Computer models show that the stepover structure may form because differences in the trends of lower crustal shearing and inherited upper crustal faults make a step between en echelon fault segments the easiest path for slip in the upper crust. The models predict that the modern seismicity occurs only on a subset of the faults in the New Madrid stepover structure, that only the southern part of the stepover structure ruptured in the A.D. 1811–1812 earthquakes, and that the stepover formed because the trends of older faults are not the same as the current direction of shearing.

  9. Report on the Fourth Reactor Refueling. Laguna Verde Nuclear Central. Unit 1. April-May 1995

    International Nuclear Information System (INIS)

    The fourth refueling of Unit 1 of the Laguna Verde Nuclear Central was carried out from April 17 to May 31, 1995, with the participation of a task group of 358 persons, including technicians and radiation protection officers and auxiliaries. Radiation monitoring and radiological surveillance of the workers were maintained throughout the refueling process, always in keeping with the ALARA criteria. The check points for radiation levels were set at the primary containment (dry well), the refueling floor, the decontamination room (level 10.5), the turbine building, and the radioactive waste building. Taking advantage of the refueling period, rooms 203 and 213 of the turbine building underwent inspection and maintenance work on valves, heaters, and heater drains. Management aspects such as personnel selection and training, costs, and accounting are also presented in this report. Owing to the high man-hour cost of the ININ staff, their participation in the refueling process was smaller than in previous years. (Author)

  10. Computer aided design of fast neutron therapy units

    International Nuclear Information System (INIS)

    Conceptual design of a radiation-therapy unit using fusion neutrons is presently being considered by KMS Fusion, Inc. As part of this effort, a powerful and versatile computer code, TBEAM, has been developed which enables the user to determine physical characteristics of the fast neutron beam generated in the facility under consideration, using certain given design parameters of the facility as inputs. TBEAM uses the method of statistical sampling (Monte Carlo) to solve the space, time and energy dependent neutron transport equation relating to the conceptual design described by the user-supplied input parameters. The code traces the individual source neutrons as they propagate throughout the shield-collimator structure of the unit, and it keeps track of each interaction by type, position and energy. In its present version, TBEAM is applicable to homogeneous and laminated shields of spherical geometry, to collimator apertures of conical shape, and to neutrons emitted by point sources or such plate sources as are used in neutron generators of various types. TBEAM-generated results comparing the performance of point or plate sources in otherwise identical shield-collimator configurations are presented in numerical form. (H.K.)
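
    A heavily simplified, hedged sketch of the statistical-sampling idea behind a code like TBEAM is given below: free-flight distances are drawn from an exponential distribution and each interaction is classified by a sampled type, here for a one-dimensional slab. The cross section, absorption probability, and forward-only transport are placeholders for illustration, not TBEAM's shield-collimator model.

      import numpy as np

      rng = np.random.default_rng(7)
      sigma_total = 0.25          # assumed macroscopic total cross section (1/cm)
      p_absorb = 0.3              # assumed absorption probability per collision
      slab_thickness = 20.0       # cm

      transmitted = 0
      n_neutrons = 100_000
      for _ in range(n_neutrons):
          x = 0.0
          while True:
              x += rng.exponential(1.0 / sigma_total)   # sampled distance to next collision
              if x >= slab_thickness:
                  transmitted += 1                      # escaped through the slab
                  break
              if rng.random() < p_absorb:
                  break                                 # absorbed; history ends
              # otherwise scattered; for simplicity the neutron keeps travelling forward

      print(f"transmitted fraction: {transmitted / n_neutrons:.4f}")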

  11. Computer Applications in the Design Process.

    Science.gov (United States)

    Winchip, Susan

    Computer Assisted Design (CAD) and Computer Assisted Manufacturing (CAM) are emerging technologies now being used in home economics and interior design applications. A microcomputer in a computer network system is capable of executing computer graphic functions such as three-dimensional modeling, as well as utilizing office automation packages to…

  12. Optical signal processing using photonic reservoir computing

    Science.gov (United States)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency, and high speed. In this paper, a photonic structure has been proposed for reservoir computing, which is investigated using a simple yet non-partial noisy time series prediction task. This study includes the application of a suitable topology with self-feedbacks in a network of SOAs - which lends the system a strong memory - and leads to adjusting adequate parameters resulting in perfect recognition accuracy (100%) for noise-free time series, which shows a 3% improvement over previous results. For the classification of noisy time series, the rate of accuracy showed a 4% increase and amounted to 96%. Furthermore, an analytical approach was suggested to solve the rate equations, which led to a substantial decrease in the simulation time, an important parameter in the classification of large signals such as speech recognition, and better results were obtained compared with previous works.
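
    Although the paper's reservoir is photonic (a network of SOAs), the computational principle can be sketched in plain Python as a small echo-state-style reservoir with a trained linear readout; all sizes and parameters below are arbitrary assumptions, and the toy task is one-step-ahead prediction of a noisy sine rather than the paper's task.

      import numpy as np

      rng = np.random.default_rng(42)
      n_res, n_steps = 100, 1000

      # Input: a noisy sine; target: the same signal one step ahead (toy prediction task)
      u = np.sin(np.linspace(0, 20 * np.pi, n_steps)) + 0.05 * rng.standard_normal(n_steps)
      target = np.roll(u, -1)

      W_in = rng.uniform(-0.5, 0.5, size=n_res)
      W = rng.standard_normal((n_res, n_res))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

      states = np.zeros((n_steps, n_res))
      x = np.zeros(n_res)
      for t in range(n_steps):
          x = np.tanh(W @ x + W_in * u[t])              # recurrent reservoir update
          states[t] = x

      # Ridge-regression readout trained on an initial segment (after a washout period)
      train = slice(100, 800)
      A = states[train]
      W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ target[train])

      pred = states[800:-1] @ W_out
      err = np.sqrt(np.mean((pred - target[800:-1]) ** 2))
      print(f"test RMSE: {err:.3f}")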

  13. Centralized computer-based controls of the Nova Laser Facility

    International Nuclear Information System (INIS)

    This article introduces the overall architecture of the computer-based Nova Laser Control System and describes its basic components. Use of standard hardware and software components ensures that the system, while specialized and distributed throughout the facility, is adaptable. 9 references, 6 figures

  14. Computer Support for Document Management in the Danish Central Government

    DEFF Research Database (Denmark)

    Hertzum, Morten

    1995-01-01

    Document management systems are generally assumed to hold a potential for delegating the recording and retrieval of documents to professionals such as civil servants and for supporting the coordination and control of work, so-called workflow management. This study investigates the use ... and organizational impact of document management systems in the Danish central government. The currently used systems unfold around the recording of incoming and outgoing paper mail and have typically not been accompanied by organizational changes. Rather, document management tends to remain an appendix ... is applied most extensively in an institution with certain mass production characteristics, and the systems do not address needs specific to the civil servants.

  15. Effects of sleep deprivation on central auditory processing

    OpenAIRE

    Liberalesso Paulo Breno; D’Andrea Karlin Fabianne; Cordeiro Mara L; Zeigelboim Bianca; Marques Jair; Jurkiewicz Ari

    2012-01-01

    Background: Sleep deprivation is extremely common in contemporary society, and is considered to be a frequent cause of disorders of behavior, mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse informat...

  16. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip;

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occur within the porous adsorbent...

  17. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  18. Eros details enhanced by computer processing

    Science.gov (United States)

    2000-01-01

    The NEAR camera's ability to show details of Eros's surface is limited by the spacecraft's distance from the asteroid. That is, the closer the spacecraft is to the surface, the more details are visible. However, mission scientists regularly use computer processing to squeeze an extra measure of information from returned data. In a technique known as 'superresolution', many images of the same scene acquired at very, very slightly different camera pointings are carefully overlain and processed to bring out details even smaller than would normally be visible. In this rendition constructed out of 20 image frames acquired Feb. 12, 2000, the images have first been enhanced ('high-pass filtered') to accentuate small-scale details. Superresolution was then used to bring out features below the normal ability of the camera to resolve. Built and managed by The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, NEAR was the first spacecraft launched in NASA's Discovery Program of low-cost, small-scale planetary missions. See the NEAR web page at http://near.jhuapl.edu for more details.

  19. Latar as the Central Point of Houses Group Unit: Identifiability for Spatial Structure in Kasongan, Yogyakarta, Indonesia

    Directory of Open Access Journals (Sweden)

    T. Yoyok Wahyu Subroto

    2012-05-01

    Full Text Available The massive spatial expansion of the city into rural areas in recent decades has caused problems related to spatial exploitation in the surrounding villages. This raises the question of whether the conversion of open space into built-up land has spatial structure implications for the settlement growth and evolution process in those villages. This paper reports a case study of Kasongan village in Bantul regency, Yogyakarta, Indonesia, between 1973 and 2010, in which the problem of spatial structure, rarely addressed in analyses of village settlement growth and evolution, is examined. A bound axis consisting of 4 (four) quadrants and one intersection, referring to the reference axes of a Cartesian Coordinate System (CCS), is used to analyze the setting of the houses group unit around the 4 areas/quadrants. Through such spatial process analysis by means of a spatial structure approach, the continuity of the latar (yard) at the center of the houses group unit is detected. This research finds that the latar, which has persisted at 'the central point' of the houses group unit in Kasongan over 4 decades, is the prominent factor of the basic spatial structure and composes the houses group unit in Kasongan.

  20. Optimization models of the supply of power structures’ organizational units with centralized procurement

    OpenAIRE

    Sysoiev Volodymyr

    2013-01-01

    Management of the state power structures’ organizational units for materiel and technical support requires the use of effective tools for supporting decisions, due to the complexity, interdependence, and dynamism of supply in the market economy. The corporate nature of power structures is of particular interest to centralized procurement management, as it provides significant advantages through coordination, eliminating duplication, and economy of scale. Th...

  1. Four central questions about prediction in language processing.

    Science.gov (United States)

    Huettig, Falk

    2015-11-11

    The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing. This article is part of a Special Issue entitled SI: Prediction and Attention.

  2. GPU-accelerated micromagnetic simulations using cloud computing

    Science.gov (United States)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  3. GPU-accelerated micromagnetic simulations using cloud computing

    CERN Document Server

    Jermain, C L; Buhrman, R A; Ralph, D C

    2015-01-01

    Highly-parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  4. Accelerating glassy dynamics using graphics processing units

    CERN Document Server

    Colberg, Peter H

    2009-01-01

    Modern graphics hardware offers peak performances close to 1 Tflop/s, and NVIDIA's CUDA provides a flexible and convenient programming interface to exploit these immense computing resources. We demonstrate the ability of GPUs to perform high-precision molecular dynamics simulations for nearly a million particles running stably over many days. Particular emphasis is put on the numerical long-time stability in terms of energy and momentum conservation. Floating point precision is a crucial issue here, and sufficient precision is maintained by double-single emulation of the floating point arithmetic. As a demanding test case, we have reproduced the slow dynamics of a binary Lennard-Jones mixture close to the glass transition. The improved numerical accuracy permits us to follow the relaxation dynamics of a large system over 4 non-trivial decades in time. Further, our data provide evidence for a negative power-law decay of the velocity autocorrelation function with exponent 5/2 in the close vicinity of the transi...
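
    The "double-single emulation" mentioned above can be illustrated with a hedged Python sketch: a value is carried as a pair of float32 numbers, and Knuth's two-sum keeps the rounding error of each addition. This is a generic textbook construction for illustration, not the paper's CUDA implementation.

      import numpy as np

      f32 = np.float32

      def two_sum(a, b):
          """Error-free transformation: returns (s, e) with s = fl(a+b) and a+b = s+e."""
          s = f32(a + b)
          v = f32(s - a)
          e = f32(f32(a - f32(s - v)) + f32(b - v))
          return s, e

      def ds_add(x, y):
          """Add two double-single numbers x = (hi, lo) and y = (hi, lo)."""
          s, e = two_sum(x[0], y[0])
          e = f32(e + f32(x[1] + y[1]))
          return two_sum(s, e)

      # Accumulate many tiny increments onto a large value
      acc_single = f32(1e7)
      acc_ds = (f32(1e7), f32(0.0))
      for _ in range(10000):
          acc_single = f32(acc_single + f32(0.1))
          acc_ds = ds_add(acc_ds, (f32(0.1), f32(0.0)))

      # Plain float32 loses the 0.1 increments entirely at this magnitude, while the
      # double-single pair should stay close to the exact result 1e7 + 1000.
      print("plain float32:", acc_single)
      print("double-single:", np.float64(acc_ds[0]) + np.float64(acc_ds[1]))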

  5. Optimized Technology for Residuum Processing in the ARGG Unit

    Institute of Scientific and Technical Information of China (English)

    Pan Luoqi; Yuan hongxing; Nie Baiqiu

    2006-01-01

    The influence of feedstock properties on the operation of the FCC unit was studied to identify the cause of the deteriorated product distribution associated with the increasingly heavy feedstock of the ARGG unit. In order to maximize the economic benefits of the ARGG unit, a series of measures, including modification of the catalyst formulation, retention of high catalyst activity, application of mixed termination agents to control the reaction temperature and once-through operation, and optimization of the catalyst regeneration technique, were adopted to adapt the ARGG unit to the processing of heavy feedstock with a carbon residue averaging 7%. The heavy oil processing technology has brought about clear economic benefits.

  6. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing

  8. Accelerating chemical database searching using graphics processing units.

    Science.gov (United States)

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097. PMID:21696144
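
    To make the compression step concrete, here is a small CPU-only sketch of Elias gamma coding applied to the gaps between set-feature positions of a sparse, unfolded fingerprint. It is a generic illustration, not the authors' GPU implementation, and the example fingerprint is invented.

        def elias_gamma_encode(n: int) -> str:
            # Gamma code of a positive integer: (len(binary) - 1) zeros, then the binary form itself.
            assert n >= 1
            b = bin(n)[2:]
            return "0" * (len(b) - 1) + b

        def elias_gamma_decode(bits: str):
            # Decode a concatenation of gamma codes back into a list of integers.
            out, i = [], 0
            while i < len(bits):
                zeros = 0
                while bits[i] == "0":
                    zeros += 1
                    i += 1
                out.append(int(bits[i:i + zeros + 1], 2))
                i += zeros + 1
            return out

        # Positions of set features in a sparse fingerprint (invented example), delta-encoded as gaps >= 1.
        positions = [3, 17, 18, 1025, 70000]
        gaps = [positions[0] + 1] + [b - a for a, b in zip(positions, positions[1:])]
        stream = "".join(elias_gamma_encode(g) for g in gaps)
        assert elias_gamma_decode(stream) == gaps
        print(len(stream), "bits encode", len(positions), "features")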

  9. Spatiotemporal computed tomography of dynamic processes

    Science.gov (United States)

    Kaestner, Anders; Münch, Beat; Trtik, Pavel; Butler, Les

    2011-12-01

    Modern computed tomography (CT) equipment allowing fast 3-D imaging also makes it possible to monitor dynamic processes by 4-D imaging. Because the acquisition time of current 3-D CT systems still ranges from milliseconds up to hours, depending on the detector system and the source, the desired temporal and spatial resolution must be balanced against each other. Furthermore, motion artifacts will occur, especially at high spatial resolution and longer measuring times. We propose two approaches based on nonsequential projection angle sequences that allow a convenient post-acquisition balance of temporal and spatial resolution. Both strategies are compatible with existing instruments, needing only a simple reprogramming of the angle list used for projection acquisition and care with the projection order list. Both approaches reduce the impact of motion artifacts. The strategies are applied and validated with cold neutron imaging of water desorption from initially saturated particles during natural air-drying experiments and with X-ray tomography of a polymer blend heated during imaging.
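
    Nonsequential angle orderings of this kind can be generated in several ways; one common choice in the literature is a golden-ratio increment, so that any contiguous subset of projections covers the angular range roughly evenly and can be regrouped after acquisition. The sketch below is an illustrative assumption, not necessarily the exact scheme used by the authors.

        import math

        def golden_angle_sequence(n_projections: int, angular_range: float = 180.0):
            # Each projection advances by angular_range / golden ratio (about 111.25 deg for 180 deg),
            # so any temporal sub-window of the scan still samples angles quasi-uniformly.
            golden = (1.0 + math.sqrt(5.0)) / 2.0
            step = angular_range / golden
            return [(k * step) % angular_range for k in range(n_projections)]

        angles = golden_angle_sequence(8)
        print([round(a, 2) for a in angles])   # the first 8 angles already spread over 0-180 deg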

  10. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    Science.gov (United States)

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  11. An Overview of Computer-Based Natural Language Processing.

    Science.gov (United States)

    Gevarter, William B.

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…

  12. Ground motion-simulations of 1811-1812 New Madrid earthquakes, central United States

    Science.gov (United States)

    Ramirez-Guzman, L.; Graves, Robert; Olsen, Kim B.; Boyd, Oliver; Cramer, Chris H.; Hartzell, Stephen; Ni, Sidao; Somerville, Paul G.; Williams, Robert; Zhong, Jinquan

    2015-01-01

    We performed a suite of numerical simulations based on the 1811–1812 New Madrid seismic zone (NMSZ) earthquakes, which demonstrate the importance of 3D geologic structure and rupture directivity on the ground‐motion response throughout a broad region of the central United States (CUS) for these events. Our simulation set consists of 20 hypothetical earthquakes located along two faults associated with the current seismicity trends in the NMSZ. The hypothetical scenarios range in magnitude from M 7.0 to 7.7 and consider various epicenters, slip distributions, and rupture characterization approaches. The low‐frequency component of our simulations was computed deterministically up to a frequency of 1 Hz using a regional 3D seismic velocity model and was combined with higher‐frequency motions calculated for a 1D medium to generate broadband synthetics (0–40 Hz in some cases). For strike‐slip earthquakes located on the southwest–northeast‐striking NMSZ axial arm of seismicity, our simulations show 2–10 s period energy channeling along the trend of the Reelfoot rift and focusing strong shaking northeast toward Paducah, Kentucky, and Evansville, Indiana, and southwest toward Little Rock, Arkansas. These waveguide effects are further accentuated by rupture directivity such that an event with a western epicenter creates strong amplification toward the northeast, whereas an eastern epicenter creates strong amplification toward the southwest. These effects are not as prevalent for simulations on the reverse‐mechanism Reelfoot fault, and large peak ground velocities (>40  cm/s) are typically confined to the near‐source region along the up‐dip projection of the fault. Nonetheless, these basin response and rupture directivity effects have a significant impact on the pattern and level of the estimated intensities, which leads to additional uncertainty not previously considered in magnitude estimates of the 1811–1812 sequence based only on historical

  13. Factors influencing the operation speed of computer data processing

    Institute of Scientific and Technical Information of China (English)

    吕睿

    2015-01-01

    The speed of computer data processing currently struggles to keep pace with people's growing entertainment and office demands, which restricts the development and progress of computer technology. This paper briefly analyzes the basic concepts of computer data processing, clarifies its main data-processing characteristics and, drawing on the relevant professional theory, examines in depth the factors that influence the operation speed of computer data processing, including the central processing unit, computer memory, and computer hard disk. It concludes that the CPU, memory, and hard disk are the main factors affecting data-processing speed and recommends comprehensive optimization of all three in order to improve the operation speed of computer data processing.

  14. Evaluation of the Central Hearing Process in Parkinson Patients

    Directory of Open Access Journals (Sweden)

    Santos, Rosane Sampaio

    2011-04-01

    Full Text Available Introduction: Parkinson disease (PD) is a degenerative disease of insidious character that impairs the central nervous system and causes biological, psychological, and social changes. It shows motor signs and symptoms characterized by tremor, postural instability, rigidity, and bradykinesia. Objective: To evaluate central hearing function in PD patients. Method: A descriptive, prospective, cross-sectional study in which 10 individuals diagnosed with PD, forming the study group (SG), and 10 normally hearing individuals, forming the control group (CG), were evaluated; the mean age was 63.8 years (SD 5.96). Both groups underwent otorhinolaryngological and standard audiological evaluations and the dichotic test of alternate disyllables (SSW). Results: In the quantitative analysis, the CG showed 80% normality for competitive right-ear hearing (RC) and 60% for competitive left-ear hearing (LC), compared with the SG, which presented 70% for RC and 40% for LC. In the qualitative analysis, the largest percentage of errors in the SG was in the order effect. The results showed difficulty in identifying a sound in the presence of a competing sound, as well as in memory ability. Conclusion: A qualitative and quantitative difference was observed in the SSW test between the evaluated groups, although the statistical data did not show significant differences. The importance of evaluating the central hearing process is emphasized as a contribution to the procedures to be adopted in therapeutic follow-up.

  15. Coordination processes in computer supported collaborative writing

    NARCIS (Netherlands)

    Kanselaar, G.; Erkens, Gijsbert; Jaspers, Jos; Prangsma, M.E.

    2005-01-01

    In the COSAR-project a computer-supported collaborative learning environment enables students to collaborate in writing an argumentative essay. The TC3 groupware environment (TC3: Text Composer, Computer supported and Collaborative) offers access to relevant information sources, a private notepad, a

  16. 2009 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-09-01

    This report presents the 2009 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of CNTA was transferred from the DOE Office of Environmental Management to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site from October 2008 through December 2009. It also represents the first year of the enhanced monitoring network and begins the new 5-year proof-of-concept monitoring period that is intended to validate the compliance boundary

  17. Closure Report Central Nevada Test Area Subsurface Corrective Action Unit 443 January 2016

    Energy Technology Data Exchange (ETDEWEB)

    Findlay, Rick [US Department of Energy, Washington, DC (United States). Office of Legacy Management]

    2015-11-01

    The U.S. Department of Energy (DOE) Office of Legacy Management (LM) prepared this Closure Report for the subsurface Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA), Nevada, Site. CNTA was the site of a 0.2- to 1-megaton underground nuclear test in 1968. Responsibility for the site’s environmental restoration was transferred from the DOE, National Nuclear Security Administration, Nevada Field Office to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 1996, as amended 2011) and all applicable Nevada Division of Environmental Protection (NDEP) policies and regulations. This Closure Report provides justification for closure of CAU 443 and provides a summary of completed closure activities; describes the selected corrective action alternative; provides an implementation plan for long-term monitoring with well network maintenance and approaches/policies for institutional controls (ICs); and presents the contaminant, compliance, and use-restriction boundaries for the site.

  18. 2010 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2011-02-01

    This report presents the 2010 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of CNTA was transferred from the DOE Office of Environmental Management to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site from December 2009 through December 2010. It also represents the second year of the enhanced monitoring network and the 5-year proof-of-concept monitoring period that is intended to validate the compliance boundary

  19. Marrying Content and Process in Computer Science Education

    Science.gov (United States)

    Zendler, A.; Spannagel, C.; Klaudt, D.

    2011-01-01

    Constructivist approaches to computer science education emphasize that as well as knowledge, thinking skills and processes are involved in active knowledge construction. K-12 computer science curricula must not be based on fashions and trends, but on contents and processes that are observable in various domains of computer science, that can be…

  20. Ultrasound guidance for central vascular access in the neonatal and pediatric intensive care unit

    Directory of Open Access Journals (Sweden)

    Al Sofyani Khouloud

    2012-01-01

    Full Text Available Background: Percutaneous central venous cannulation (CVC) in infants and children is a challenging procedure, and it is usually achieved with a blinded, external landmark-guided technique. Recent guidelines from the National Institute for Clinical Excellence (NICE) recommend the use of ultrasound guidance for central venous catheterization in children. The purpose of this study was to evaluate this method in a pediatric and neonatal intensive care unit, assessing the number of attempts, access time (skin to vein), incidence of complication, and the ease of use for central venous access in the neonatal age group. Methods: After approval by the local departmental ethical committee, we evaluated an ultrasound-guided method over a period of 6 months in 20 critically ill patients requiring central venous access in a pediatric intensive care unit and a neonatal intensive care unit (median age 9 (0-204) months and weight 9.3 (1.9-60) kg). Cannulation was performed after locating the puncture site with the aid of an ultrasound device (8 MHz transducer, Vividi General Electrics®, Burroughs, USA) covered by a sterile sheath. Outcome measures included successful insertion rate, number of attempts, access time, and incidence of complications. Results: Cannulation of the central vein was 100% successful in all patients. The right femoral vein was preferred in 60% of the cases. The vein was entered on the first attempt in 75% of all patients, and the median number of attempts was 1. The median access time (skin to vein) for all patients was 64.5 s. No arterial punctures or hematomas occurred using the ultrasound technique. Conclusions: In a sample of critically ill patients from a pediatric and neonatal intensive care unit, ultrasound-guided CVC compared with published reports on traditional technique required fewer attempts and less time. It improved the overall success rate, minimized the occurrence of complications during vein cannulation and was easy to apply in

  1. General circulation model simulations of recent cooling in the east-central United States

    Science.gov (United States)

    Robinson, Walter A.; Reudy, Reto; Hansen, James E.

    2002-12-01

    In ensembles of retrospective general circulation model (GCM) simulations, surface temperatures in the east-central United States cool between 1951 and 1997. This cooling, which is broadly consistent with observed surface temperatures, is present in GCM experiments driven by observed time varying sea-surface temperatures (SSTs) in the tropical Pacific, whether or not increasing greenhouse gases and other time varying climate forcings are included. Here we focus on ensembles with fixed radiative forcing and with observed varying SST in different regions. In these experiments the trend and variability in east-central U.S. surface temperatures are tied to tropical Pacific SSTs. Warm tropical Pacific SSTs cool U.S. temperatures by diminishing solar heating through an increase in cloud cover. These associations are embedded within a year-round response to warm tropical Pacific SST that features tropospheric warming throughout the tropics and regions of tropospheric cooling in midlatitudes. Precipitable water vapor over the Gulf of Mexico and the Caribbean and the tropospheric thermal gradient across the Gulf Coast of the United States increase when the tropical Pacific is warm. In observations, recent warming in the tropical Pacific is also associated with increased precipitable water over the southeast United States. The observed cooling in the east-central United States, relative to the rest of the globe, is accompanied by increased cloud cover, though year-to-year variations in cloud cover, U.S. surface temperatures, and tropical Pacific SST are less tightly coupled in observations than in the GCM.

  2. Fast extended focused imaging in digital holography using a graphics processing unit.

    Science.gov (United States)

    Wang, Le; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen

    2011-05-01

    We present a simple and effective method for reconstructing extended focused images in digital holography using a graphics processing unit (GPU). The Fresnel transform method is simplified by an algorithm named fast Fourier transform pruning with frequency shift. Then the pixel size consistency problem is solved by coordinate transformation and combining the subpixel resampling and the fast Fourier transform pruning with frequency shift. With the assistance of the GPU, we implemented an improved parallel version of this method, which obtained about a 300-500-fold speedup compared with central processing unit codes.

  3. Steroid induced central serous retinopathy following follicular unit extraction in androgenic alopecia

    Directory of Open Access Journals (Sweden)

    Rakesh Tilak Raj

    2016-06-01

    Full Text Available Corticosteroids are commonly used worldwide by dermatologists for various conditions and procedures. The development of central serous retinopathy is a lesser-known complication, occurring in <10% of cases of steroid use. This case report highlights the development of central serous retinopathy after a low dose of prednisolone (20 mg per day) was prescribed for androgenic alopecia during post-surgical follicular unit extraction (FUE) follow-up; the retinopathy resolved spontaneously after gradual withdrawal of the steroid. Awareness is therefore required for its early detection and management, as it can potentially cause irreversible visual impairment. [Int J Basic Clin Pharmacol 2016; 5(3): 1152-1155]

  4. Bandwidth Enhancement between Graphics Processing Units on the Peripheral Component Interconnect Bus

    Directory of Open Access Journals (Sweden)

    ANTON Alin

    2015-10-01

    Full Text Available General-purpose computing on graphics processing units is a new trend in high-performance computing. Present-day applications require office and personal supercomputers, which are mostly based on many-core hardware accelerators communicating with the host system through the Peripheral Component Interconnect (PCI) bus. Parallel data compression is a difficult topic, but compression has been used successfully to improve the communication between parallel message passing interface (MPI) processes on high-performance computing clusters. In this paper we show that special-purpose compression algorithms designed for scientific floating-point data can be used to enhance the bandwidth between two graphics processing unit (GPU) devices on the PCI Express (PCIe) 3.0 x16 bus in a home-built personal supercomputer (PSC).

  5. 77 FR 51828 - Dominican Republic-Central America-United States Free Trade Agreement; Notice of Extension of the...

    Science.gov (United States)

    2012-08-27

    ... of the Secretary Dominican Republic--Central America--United States Free Trade Agreement; Notice of... Republic--Central America--United States Free Trade Agreement (CAFTA-DR). On December 22, 2011, OTLA... International Labor Affairs, U.S. Department of Labor. ACTION: Notice. The Office of Trade and Labor...

  6. Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET (HELAS Evaluation with GPU Enhanced Technology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general purpose computing on GPU) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final state jets we can evaluate on a GPU is limited to 4 for pure gluon processes ($gg\to 4g$), or 5 for processes with one or more quark lines such as $q\bar{q}\to 5g$ and $qq\to qq+3g$. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the $gg\to 4g$ processes for which the GPU gain over the CPU is about 20.

  7. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    Science.gov (United States)

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-01

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. Integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  8. Parallelizing Kernel Polynomial Method Applying Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Shinichi Yamagiwa

    2012-01-01

    Full Text Available The Kernel Polynomial Method (KPM) is one of the fast diagonalization methods used for simulations of quantum systems in research fields of condensed matter physics and chemistry. The algorithm is difficult to parallelize on a cluster computer or a supercomputer due to its fine-grained recursive calculations. This paper proposes an implementation of the KPM on recent graphics processing units (GPU), where the recursive calculations can be parallelized in a massively parallel environment. The paper also describes performance evaluations for cases where actual simulation parameters are applied: one parameter setting increases the intensity of the calculations and another increases the amount of memory used. Moreover, the impact of applying the Compressed Row Storage (CRS) format to the KPM algorithm is also discussed. Finally, it concludes that the GPU promises very high performance compared to the CPU and reduces the overall simulation time.
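
    For readers unfamiliar with the KPM, its core is the Chebyshev recursion for spectral moments of a rescaled Hamiltonian; the strict dependence of each vector on the previous two is exactly the fine-grained recursion referred to above. The following serial sketch on a small dense random matrix is a generic illustration, not the GPU code of the paper.

        import numpy as np

        def kpm_moments(H, v, n_moments):
            # Chebyshev moments mu_n = <v0| T_n(H) |v0> for H rescaled so its spectrum lies in (-1, 1).
            # Recursion: |v_{n+1}> = 2 H |v_n> - |v_{n-1}>, so each step depends on the two before it.
            v0 = v / np.linalg.norm(v)
            v1 = H @ v0
            mu = [np.vdot(v0, v0).real, np.vdot(v0, v1).real]
            for _ in range(2, n_moments):
                v2 = 2.0 * (H @ v1) - v0
                mu.append(np.vdot(v0, v2).real)
                v0, v1 = v1, v2
            return np.array(mu)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 200))
        H = (A + A.T) / 2.0
        H /= np.linalg.norm(H, 2) * 1.05      # crude rescaling so all eigenvalues fall inside (-1, 1)
        v = rng.standard_normal(200)
        print(kpm_moments(H, v, 8))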

  9. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    Science.gov (United States)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the OpenACC compiler directives and Fortran CUDA. Mixed implementation of both OpenACC and CUDA is demonstrated. CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10 or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speedups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed. Optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.

  10. Kinematic Modelling of Disc Galaxies using Graphics Processing Units

    CERN Document Server

    Bekiaris, Georgios; Fluke, Christopher J; Abraham, Roberto

    2015-01-01

    With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the Graphics Processing Unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and Nested Sampling algorithms, but also a naive brute-force approach based on Nested Grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multi-threaded dual CPU configuration. Our method's accuracy, precision and robustness a...

  11. Study guide to accompany computers data and processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Study Guide to Accompany Computer and Data Processing provides information pertinent to the fundamental aspects of computers and computer technology. This book presents the key benefits of using computers. Organized into five parts encompassing 19 chapters, this book begins with an overview of the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. This text then introduces computer hardware and describes the processor. Other chapters describe how microprocessors are made and describe the physical operation of computers. This book discusses as w

  12. Computer-aided modeling of aluminophosphate zeolites as packings of building units

    KAUST Repository

    Peskov, Maxim

    2012-03-22

    New building schemes of aluminophosphate molecular sieves from packing units (PUs) are proposed. We have investigated 61 framework types discovered in zeolite-like aluminophosphates and have identified important PU combinations using a recently implemented computational algorithm of the TOPOS package. All PUs whose packing completely determines the overall topology of the aluminophosphate framework were described and catalogued. We have enumerated 235 building models for the aluminophosphates belonging to 61 zeolite framework types, from ring- or cage-like PU clusters. It is indicated that PUs can be considered as precursor species in the zeolite synthesis processes. © 2012 American Chemical Society.

  13. A 1.5 GFLOPS Reciprocal Unit for Computer Graphics

    DEFF Research Database (Denmark)

    Nannarelli, Alberto; Rasmussen, Morten Sleth; Stuart, Matthias Bo

    2006-01-01

    The reciprocal operation 1/d is a frequent operation performed in graphics processors (GPUs). In this work, we present the design of a radix-16 reciprocal unit based on the algorithm combining the traditional digit-by-digit algorithm and the approximation of the reciprocal by one Newton-Raphson iteration. We design a fully pipelined single-precision unit to be used in GPUs. The results of the implementation show that the proposed unit can sustain a higher throughput than that of a unit implementing the normal Newton-Raphson approximation, and its area is smaller.
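
    The idea of refining a rough reciprocal with a quadratically convergent step can be illustrated in software. The sketch below uses only an exponent-based seed, so it needs several Newton-Raphson steps rather than the single step possible after the unit's radix-16 digit recurrence; it is a generic illustration, not the hardware design.

        import math

        def reciprocal(d: float, iterations: int = 6) -> float:
            # Seed from the exponent alone: with d = m * 2**e and 0.5 <= |m| < 1,
            # 2**-e is within a factor of two of 1/d (a stand-in for the unit's table/digit recurrence).
            m, e = math.frexp(d)
            x = math.copysign(math.ldexp(1.0, -e), d)
            for _ in range(iterations):
                x = x * (2.0 - d * x)   # Newton-Raphson step: the relative error is squared each time
            return x

        for d in (3.0, 0.72, -1234.5):
            print(d, reciprocal(d), 1.0 / d)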

  14. Image-Processing Software For A Hypercube Computer

    Science.gov (United States)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  15. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  16. AUTOMATION OF INVENTORY PROCESS OF PERSONAL COMPUTERS

    Directory of Open Access Journals (Sweden)

    A. I. Zaharenko

    2013-01-01

    Full Text Available The modern information infrastructure of a large or medium-sized enterprise is inconceivable without an effective system for taking inventory of computer equipment and fictitious assets. An example of the creation of such a system, which is simple to implement and has a low cost of ownership, is considered in this article.

  17. Hydroclimatological Processes in the Central American Dry Corridor

    Science.gov (United States)

    Hidalgo, H. G.; Duran-Quesada, A. M.; Amador, J. A.; Alfaro, E. J.; Mora, G.

    2015-12-01

    This work studies the hydroclimatological variability and the climatic precursors of drought in the Central American Dry Corridor (CADC), a subregion located on the Pacific coast of Southern Mexico and Central America. Droughts are frequent in the CADC, which is characterized by higher climatological aridity than the highlands and Caribbean coast of Central America. The CADC region presents large social vulnerability to hydroclimatological impacts originating from dry conditions, as a large part of the population depends on subsistence agriculture. The influence of large-scale climatic precursors such as ENSO, the Caribbean Low-Level Jet (CLLJ), low-frequency signals from the Pacific and Caribbean, and some intra-seasonal signals such as the MJO is evaluated. Previous work by the authors identified a connection between the CLLJ and CADC precipitation. This connection is more complex than a simple rain-shadow effect, and instead it was suggested that convection at the exit of the jet on the Costa Rica and Nicaragua Caribbean coasts and consequent subsidence in the Pacific could be playing a role in this connection. During summer, when the CLLJ is stronger than normal, the Inter-Tropical Convergence Zone (located mainly in the Pacific) shifts to a more southern position, and vice versa, suggesting a connection between these two processes that has not been fully explained yet. The role of the Western Hemisphere Warm Pool also needs more research. All this is important, as it suggests a working hypothesis that during summer the strength of the Caribbean wind may be responsible for the dry climate of the CADC. Another previous analysis by the authors was based on downscaled precipitation and temperature from GCMs and the NCEP/NCAR reanalysis. The data were later used in a hydrological model. Results showed a negative trend in reanalysis runoff for 1980-2012 in San José (Costa Rica) and Tegucigalpa (Honduras). This highly significant drying trend

  18. Computer Applications in the United Aircraft Corporation Library System

    Science.gov (United States)

    Neufeld, I. H.

    1973-01-01

    A remote computer terminal is used for data input in the production of a biweekly announcement bulletin covering journal literature. The terminal can also be used to query several catalog files. In another application, a subject listing of internal reports is obtained on computer output microfilm. (1 reference) (Author)

  19. Globalized Computing Education: Europe and the United States

    Science.gov (United States)

    Scime, A.

    2008-01-01

    As computing makes the world a smaller place there will be an increase in the mobility of information technology workers and companies. The European Union has recognized the need for mobility and is instituting educational reforms to provide recognition of worker qualifications. Within computing there have been a number of model curricula proposed…

  20. A Framework for Smart Distribution of Bio-signal Processing Units in M-Health

    OpenAIRE

    Mei, Hailiang; Widya, Ing; Broens, Tom; Pawar, Pravin; Halteren, van, AT; Shishkov, Boris; Sinderen, van, Marten

    2007-01-01

    This paper introduces the Bio-Signal Processing Unit (BSPU) as a functional component that hosts (part of ) the bio-signal information processing algorithms that are needed for an m-health application. With our approach, the BSPUs can be dynamically assigned to available nodes between the bio-signal source and the application to optimize the use of computation and communication resources. The main contributions of this paper are: (1) it presents the supporting architecture (e.g. components an...

  1. Operating The Central Process Systems At Glenn Research Center

    Science.gov (United States)

    Weiler, Carly P.

    2004-01-01

    As a research facility, the Glenn Research Center (GRC) trusts and expects all the systems, controlling their facilities to run properly and efficiently in order for their research and operations to occur proficiently and on time. While there are many systems necessary for the operations at GRC, one of those most vital systems is the Central Process Systems (CPS). The CPS controls operations used by GRC's wind tunnels, propulsion systems lab, engine components research lab, and compressor, turbine and combustor test cells. Used widely throughout the lab, it operates equipment such as exhausters, chillers, cooling towers, compressors, dehydrators, and other such equipment. Through parameters such as pressure, temperature, speed, flow, etc., it performs its primary operations on the major systems of Electrical Dispatch (ED), Central Air Dispatch (CAD), Central Air Equipment Building (CAEB), and Engine Research Building (ERB). In order for the CPS to continue its operations at Glenn, a new contract must be awarded. Consequently, one of my primary responsibilities was assisting the Source Evaluation Board (SEB) with the process of awarding the recertification contract of the CPS. The job of the SEB was to evaluate the proposals of the contract bidders and then to present their findings to the Source Selecting Official (SSO). Before the evaluations began, the Center Director established the level of the competition. For this contract, the competition was limited to those companies classified as a small, disadvantaged business. After an industry briefing that explained to qualified companies the CPS and type of work required, each of the interested companies then submitted proposals addressing three components: Mission Suitability, Cost, and Past Performance. These proposals were based off the Statement of Work (SOW) written by the SEB. After companies submitted their proposals, the SEB reviewed all three components and then presented their results to the SSO. While the

  2. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
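
    A toy illustration of the principle (ours, not the patented implementation; the logistic map stands in for whatever chaotic map the system uses): the same chaotic-map iteration is run redundantly, and because chaotic trajectories amplify any discrepancy exponentially, even a tiny fault in one replica soon becomes detectable by a simple comparison.

        def logistic_trajectory(x0, steps, fault_at=None):
            # Iterate the logistic map x <- 4x(1-x); optionally inject a tiny fault at one step.
            x, traj = x0, []
            for t in range(steps):
                x = 4.0 * x * (1.0 - x)
                if t == fault_at:
                    x += 1e-12              # simulated soft error, e.g. a flipped low-order bit
                traj.append(x)
            return traj

        healthy = logistic_trajectory(0.3141592653589793, 80)
        faulty = logistic_trajectory(0.3141592653589793, 80, fault_at=20)
        first_disagreement = next(t for t in range(80) if abs(healthy[t] - faulty[t]) > 1e-6)
        print("trajectories disagree beyond tolerance from step", first_disagreement)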

  3. The Organization and Evaluation of a Computer-Assisted, Centralized Immunization Registry.

    Science.gov (United States)

    Loeser, Helen; And Others

    1983-01-01

    Evaluation of a computer-assisted, centralized immunization registry after one year shows that 93 percent of eligible health practitioners initially agreed to provide data and that 73 percent continue to do so. Immunization rates in audited groups have improved significantly. (GC)

  4. On-line satellite/central computer facility of the Multiparticle Argo Spectrometer System

    International Nuclear Information System (INIS)

    An on-line satellite/central computer facility has been developed at Brookhaven National Laboratory as part of the Multiparticle Argo Spectrometer System (MASS). This facility, consisting of a PDP-9 and a CDC-6600, has been successfully used in the study of proton-proton interactions at 28.5 GeV/c. (U.S.)

  5. Computer simulation of gear tooth manufacturing processes

    Science.gov (United States)

    Mavriplis, Dimitri; Huston, Ronald L.

    1990-01-01

    The use of computer graphics to simulate gear tooth manufacturing procedures is discussed. An analytical basis for the simulation is established for spur gears. The simulation itself, however, is developed not only for spur gears, but for straight bevel gears as well. The applications of the developed procedure extend from the development of finite element models of heretofore intractable geometrical forms, to exploring the fabrication of nonstandard tooth forms.

  6. Formalizing the Process of Constructing Chains of Lexical Units

    Directory of Open Access Journals (Sweden)

    Grigorij Chetverikov

    2015-06-01

    Full Text Available The paper investigates mathematical aspects of describing the construction of chains of lexical units on the basis of finite-predicate algebra. An analysis of the construction's peculiarities is carried out, and an application of the method of finding the power of a linear logical transformation for removing the characteristic words of a dictionary entry is given. An analysis and the perspectives of the results of the study are provided.

  7. The Application of Computers to Library Technical Processing

    Science.gov (United States)

    Veaner, Allen B.

    1970-01-01

    Describes computer applications to acquisitions and technical processing and reports in detail on Stanford's development work in automated technical processing. Author is Assistant Director for Bibliographic Operation, Stanford University Libraries. (JB)

  8. Coating Process Monitoring Using Computer Vision

    OpenAIRE

    Veijola, Erik

    2013-01-01

    The aim of this Bachelor’s Thesis was to make a prototype system for Metso Paper Inc. for monitoring a paper roll coating process. If the coating is done badly and there are faults one has to redo the process which lowers the profits of the company since the process is costly. The work was proposed by Seppo Parviainen in December of 2012. The resulting system was to alarm the personnel of faults in the process. Specifically if the system that is applying the synthetic resin on to the roll...

  9. Computer Supported Collaborative Processes in Virtual Organizations

    OpenAIRE

    Paszkiewicz, Zbigniew; Cellary, Wojciech

    2012-01-01

    In global economy, turbulent organization environment strongly influences organization's operation. Organizations must constantly adapt to changing circumstances and search for new possibilities of gaining competitive advantage. To face this challenge, small organizations base their operation on collaboration within Virtual Organizations (VOs). VO operation is based on collaborative processes. Due to dynamism and required flexibility of collaborative processes, existing business information s...

  10. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    Full Text Available OBJECTIVE: To analyze auditory processing test results in children suffering from otitis media in their first five years of age, considering their age. Furthermore, to classify central auditory processing test findings regarding the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 students from public school without a history of otitis media. Experimental group I consisted of 39 students from public schools and experimental group II consisted of 30 students from private schools; students in both groups suffered from secretory otitis media in their first five years of age and underwent surgery for placement of bilateral ventilation tubes. The individuals underwent complete audiological evaluation and assessment by Auditory Processing tests. RESULTS: The left ear showed significantly worse performance when compared to the right ear in the dichotic digits test and pitch pattern sequence test. The students from the experimental groups showed worse performance when compared to the control group in the dichotic digits test and gaps-in-noise. Children from experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests compared with experimental group II. The hearing skills that were altered were temporal resolution and figure-ground perception. CONCLUSION: Children who suffered from secretory otitis media in their first five years and who underwent surgery for placement of bilateral ventilation tubes showed worse performance in auditory abilities, and children from public schools had worse results on auditory processing tests compared with students from private schools.

  11. Sensor-based mapping of soil quality on degraded claypan landscapes of the central United States

    Science.gov (United States)

    Claypan soils (Epiaqualfs) in the central USA have experienced severe erosion as a result of tillage practices of the late 1800s and 1900s. Because of the site-specific nature of erosion processes within claypan fields, research is needed to achieve cost-effective sensing and mapping of soil and lan...

  12. Effects of sleep deprivation on central auditory processing

    Directory of Open Access Journals (Sweden)

    Liberalesso Paulo Breno

    2012-07-01

    Full Text Available Abstract Background: Sleep deprivation is extremely common in contemporary society and is considered a frequent cause of behavioral disorders and of changes in mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse information. In the present study, thirty healthy adult volunteers (17 females and 13 males, aged 30.75 ± 7.14 years) were subjected to a pure tone audiometry test, a speech recognition threshold test, a speech recognition task, the Staggered Spondaic Word Test (SSWT), and the Random Gap Detection Test (RGDT). Baseline (BSL) performance was compared to performance after 24 hours of sleep deprivation (24hSD) using the Student's t test. Results: Mean RGDT score was elevated in the 24hSD condition (8.0 ± 2.9 ms) relative to the BSL condition for the whole cohort (6.4 ± 2.8 ms; p = 0.0005), for males (p = 0.0066), and for females (p = 0.0208). Sleep deprivation reduced SSWT scores for the whole cohort in both ears (right: BSL 98.4% ± 1.8% vs. SD 94.2% ± 6.3%, p = 0.0005; left: BSL 96.7% ± 3.1% vs. SD 92.1% ± 6.1%, p < …). Conclusion: Sleep deprivation impairs RGDT and SSWT performance. These findings confirm that sleep deprivation has central effects that may impair performance in other areas of life.

  13. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL]; Potok, Thomas E [ORNL]; Patton, Robert M [ORNL]; ST Charles, Jesse Lee [ORNL]

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity, O(n^2). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
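
    A much-simplified CPU sketch of the O(n^2) flocking step described above (ours, not the ORNL CUDA code): each document is a point on a 2-D canvas that drifts toward documents whose term vectors are similar to its own, so similar documents end up spatially clustered. The toy data are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n_docs, n_terms = 300, 50
        tfidf = rng.random((n_docs, n_terms))                        # toy document term vectors
        unit = tfidf / np.linalg.norm(tfidf, axis=1, keepdims=True)
        similar = unit @ unit.T > 0.8                                # O(n^2) pairwise cosine similarity
        np.fill_diagonal(similar, False)

        pos = rng.random((n_docs, 2)) * 100.0                        # documents as points on a canvas
        for _ in range(50):                                          # each iteration: fly toward similar documents
            for i in range(n_docs):
                if similar[i].any():
                    pos[i] += 0.1 * (pos[similar[i]].mean(axis=0) - pos[i])
        print("canvas spread after flocking:", pos.std(axis=0))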

  14. Winter habitat associations of blackbirds and starlings wintering in the south-central United States

    Science.gov (United States)

    Matthew Strassburg,; Crimmins, Shawn M.; George M. Linz,; McKann, Patrick C.; Thogmartin, Wayne E.

    2015-01-01

    Birds can cause extensive crop damage in the United States. In some regions, depredating species comprise a substantial portion of the total avian population, emphasizing their importance both economically and ecologically. We used the National Audubon Society Christmas Bird Count data from the south-central United States and mixed-effects models to identify habitat factors associated with population trend and abundance for 5 species: red-winged blackbird (Agelaius phoeniceus), common grackle (Quiscalus quiscula), rusty blackbird (Euphagus carolinus), Brewer’s blackbird (Euphagus cyanocephalus), and European starling (Sturnus vulgaris). Overall, we found positive associations between bird abundance and agricultural land-cover for all species. Relationships between abundance and other land-cover types were species-specific, often with contrasting relationships among species. Likewise, we found no consistent patterns among abundance and climate. Of the 5 species, only red-winged blackbirds had a significant population trend in our study area, increasing annually by 2.4%. There was marginal evidence to suggest population increases for rusty blackbirds, whereas all other species showed no trend in population size within our study area. Our study provides managers who are interested in limiting crop damage in the south-central United States with novel information on habitat associations in the region that could be used to improve management and control actions.

  15. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. Presentation at the ICWL 2008 conference. August, 20, 2008, Jinhua, China.

  16. Central washout sign in computer-aided evaluation of breast MRI: preliminary results

    International Nuclear Information System (INIS)

    Background: Although computer-aided evaluation (CAE) programs were introduced to help differentiate benign tumors from malignant ones, the set of CAE-measured parameters that best predicts malignancy has not yet been established. Purpose: To assess the value of the central washout sign on CAE color overlay images of breast MRI. Material and Methods: We evaluated the frequency of the central washout sign using CAE. The central washout sign was defined as thin, rim-like persistent kinetics in the periphery of the tumor, with plateau and then washout kinetics appearing sequentially toward the center. Two additional CAE delayed-kinetic variables were compared with the central washout sign for assessment of diagnostic utility: the predominant enhancement type (washout, plateau, or persistent) and the most suspicious enhancement type (any washout > any plateau > any persistent kinetics). Results: One hundred and forty-nine pathologically proven breast lesions (130 malignant, 19 benign) were evaluated. A central washout sign was associated with 87% of malignant lesions but only 11% of benign lesions. Significant differences were found when delayed-phase kinetics were categorized by the most suspicious enhancement type (P < 0.001) and by the presence of the central washout sign (P < 0.001). Under the criteria of the most suspicious kinetics, 68% of benign lesions were classified as having a plateau or washout pattern. Conclusion: The central washout sign is a reliable indicator of malignancy on CAE color overlay images of breast MRI.

  17. From Graphic Processing Unit to General Purpose Graphic Processing Unit

    Institute of Scientific and Technical Information of China (English)

    刘金硕; 刘天晓; 吴慧; 曾秋梅; 任梦菲; 顾宜淳

    2013-01-01

    This paper defines the GPU (graphics processing unit), general-purpose computing on the GPU (GPGPU), and GPU-based programming models and environments. It divides the development of the GPU into four stages and describes how the GPU architecture evolved from a non-unified rendering architecture to a unified rendering architecture and then to the new-generation Fermi architecture. It then compares the GPGPU architecture with multi-core CPU architectures and distributed cluster architectures in terms of both hardware and software. The analysis indicates that medium-grained, thread-level, data-intensive parallel computation is best handled with multi-core, multi-threaded parallelism; coarse-grained, network-intensive parallel computation with cluster parallelism; and fine-grained, compute-intensive parallel computation with GPGPU parallelism. Finally, the paper outlines future research hotspots and development directions for GPGPU, namely automatic parallelization for GPGPU, CUDA support for multiple languages, and CUDA performance optimization, and introduces some typical applications of GPGPU.

  18. Neonatal mortality in intensive care units of Central Brazil

    Directory of Open Access Journals (Sweden)

    Claci F Weirich

    2005-10-01

    Full Text Available OBJECTIVE: To identify potential prognostic factors for neonatal mortality among newborns referred to intensive care units. METHODS: A live-birth cohort study was carried out in Goiânia, Central Brazil, from November 1999 to October 2000. Linked birth and infant death certificates were used to ascertain the cohort of live born infants. An additional active surveillance system of neonatal-based mortality was implemented. Exposure variables were collected from birth and death certificates. The outcome was survivors (n=713) and deaths (n=162) in all intensive care units in the study period. Cox's proportional hazards model was applied and a receiver operating characteristic curve was used to compare the performance of statistically significant variables in the multivariable model. Adjusted mortality rates by birth weight and 5-min Apgar score were calculated for each intensive care unit. RESULTS: Low birth weight and 5-min Apgar score remained independently associated with death. Birth weight equal to 2,500 g had 0.71 accuracy (95% CI: 0.65-0.77) for predicting neonatal death (sensitivity = 72.2%). A wide variation in mortality rates was found among intensive care units (9.5-48.1%), and two of them remained with significantly high mortality rates even after adjusting for birth weight and 5-min Apgar score. CONCLUSIONS: This study corroborates birth weight as a sensitive screening variable in surveillance programs for neonatal death, and also to target intensive care units with high mortality rates for implementing preventive actions and interventions during the delivery period.

  19. Computer teaching process optimization strategy analysis of thinking ability

    Directory of Open Access Journals (Sweden)

    Luo Liang

    2016-01-01

    Full Text Available Computer science is a basic university course for college students, one that lays a theoretical foundation for subsequent professional learning. In recent years, countries and universities have attached great importance to computer teaching for young college students, with the aim of improving students' computational thinking ability so that they can ultimately use computational thinking to analyze and solve problems of daily life. This article therefore further discusses and analyzes how to cultivate college students' computational thinking ability in the process of computer teaching, and explores strategies and methods for optimizing that process.

  20. Characterization of the Temporal Clustering of Flood Events across the Central United States in terms of Climate States

    Science.gov (United States)

    Mallakpour, Iman; Villarini, Gabriele; Jones, Michael; Smith, James

    2016-04-01

    The central United States is a region of the country that has been plagued by frequent catastrophic flooding (e.g., flood events of 1993, 2008, 2013, and 2014), with large economic and social repercussions (e.g., fatalities, agricultural losses, flood losses, water quality issues). The goal of this study is to examine whether it is possible to describe the occurrence of flood events at the sub-seasonal scale in terms of variations in the climate system. Daily streamflow time series from 774 USGS stream gage stations over the central United States (defined here to include North Dakota, South Dakota, Nebraska, Kansas, Missouri, Iowa, Minnesota, Wisconsin, Illinois, West Virginia, Kentucky, Ohio, Indiana, and Michigan) with a record of at least 50 years and ending no earlier than 2011 are used for this study. We use a peak-over-threshold (POT) approach to identify flood peaks so that we have, on average, two events per year. We model the occurrence/non-occurrence of a flood event over time using regression models based on Cox processes. Cox processes are widely used in biostatistics and can be viewed as a generalization of Poisson processes. Rather than assuming that flood events occur independently of the occurrence of previous events (as in Poisson processes), Cox processes allow us to account for the potential presence of temporal clustering, which manifests itself in an alternation of quiet and active periods. Here we model the occurrence/non-occurrence of flood events using two climate indices as climate time-varying covariates: the North Atlantic Oscillation (NAO) and the Pacific-North American pattern (PNA). The results of this study show that NAO and/or PNA can explain the temporal clustering in flood occurrences in over 90% of the stream gage stations we considered. Analyses of the sensitivity of the results to different average numbers of flood events per year (from one to five) are also performed and lead to the same conclusions. The findings of this work
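
    The study itself conditions a Cox-process model on NAO/PNA; as a much simplified stand-in (not the authors' method), the sketch below extracts peaks with a peak-over-threshold rule tuned to roughly two events per year, declusters them, and relates monthly counts to a climate covariate with a Poisson GLM. All data here are synthetic and the 7-day declustering window is an assumption.

    ```python
    # Illustrative sketch only: POT event extraction plus a Poisson regression
    # on a stand-in climate index, using synthetic daily streamflow.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_days = 365 * 50
    flow = rng.gamma(shape=2.0, scale=50.0, size=n_days)      # synthetic daily flow
    threshold = np.quantile(flow, 1 - 2 / 365.25)             # ~2 exceedances per year
    exceed = flow > threshold

    # Declustering: keep an exceedance only if >= 7 days after the previous event
    events, last = [], -10**9
    for day in np.flatnonzero(exceed):
        if day - last >= 7:
            events.append(day)
            last = day

    month_index = np.arange(n_days) // 30                     # crude 30-day "months"
    counts = np.bincount(month_index[events], minlength=month_index[-1] + 1)
    nao = rng.normal(size=counts.size)                        # stand-in for a monthly NAO series

    X = sm.add_constant(nao)
    model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(model.summary())
    ```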

  1. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    R.G. Belleman; J. Bédorf; S.F. Portegies Zwart

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to
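
    The paper implements the force evaluation as a CUDA kernel on the GPU; the snippet below is only a CPU/NumPy sketch of the same O(N^2) direct-summation acceleration, not the authors' kernel. The softening length and particle setup are assumptions.

    ```python
    # CPU/NumPy sketch of direct-summation N-body accelerations.
    import numpy as np

    def accelerations(pos, mass, softening=1e-2, G=1.0):
        """pos: (N, 3) positions, mass: (N,) masses -> (N, 3) accelerations."""
        dx = pos[None, :, :] - pos[:, None, :]            # pairwise separation vectors
        r2 = (dx ** 2).sum(-1) + softening ** 2           # softened squared distances
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                     # no self-interaction
        return G * (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

    rng = np.random.default_rng(1)
    pos = rng.normal(size=(1024, 3))
    mass = np.full(1024, 1.0 / 1024)
    print(accelerations(pos, mass).shape)                 # (1024, 3)
    ```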

  2. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, understood as the streaming, continuous integration of large-volume, high-velocity data covering everything from all sources to final destinations. Big data work ranges over data mining, data analysis and decision making, drawing out statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our lives better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing-system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in effective management of dynamic and large-scale data, and efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering the big size of data, variety of data and frequent chan...

  3. A Computational Chemistry Database for Semiconductor Processing

    Science.gov (United States)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if codes did not come with a reliable database of chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database; rather, they leave the task of finding whatever is needed to the user. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area. Our effort focuses on the following five areas: 1. Thermal CVD reaction mechanisms and rate constants. 2. Thermochemical properties. 3. Transport properties. 4. Electron-molecule collision cross sections. 5. Gas-surface interactions.

  4. Computer-aided software development process design

    Science.gov (United States)

    Lin, Chi Y.; Levary, Reuven R.

    1989-01-01

    The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
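
    SLICS itself is not reproduced in the record; as a toy system-dynamics sketch in the same spirit, the snippet below integrates two stocks (remaining work and completed work) with flows whose rates (staffing, productivity, defect fraction) are purely illustrative assumptions.

    ```python
    # Toy system-dynamics sketch of a software project (not SLICS itself).
    def simulate(total_tasks=1000.0, staff=10.0, productivity=0.8,
                 error_fraction=0.15, dt=1.0, horizon=200):
        remaining, completed, t = total_tasks, 0.0, 0.0
        history = []
        while t < horizon and remaining > 1e-6:
            done = min(remaining, staff * productivity * dt)   # development flow
            rework = done * error_fraction                     # defects flow back
            remaining += rework - done
            completed += done - rework
            t += dt
            history.append((t, remaining, completed))
        return history

    for week, rem, comp in simulate()[::20]:
        print(f"week {week:5.0f}: remaining={rem:7.1f} completed={comp:7.1f}")
    ```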

  5. Development in Central Europe Includes Food Processing Business

    OpenAIRE

    Declerck, Francis

    2004-01-01

    The economic integration of Central European countries into the EU started at the beginning of the 1990s. ESSEC Business School, in partnership with Warsaw Agricultural University SGGW, and food companies have invested heavily in Central Europe, particularly Poland, before May 1, 2004, the official date of the EU enlargement to 8 Central European countries: Estonia, Hungary, Latvia, Lithuania, Poland, the Czech Republic, Slovenia and Slovakia. With more than half the population and business act...

  6. EEG processing and its application in brain-computer interface

    Institute of Scientific and Technical Information of China (English)

    Wang Jing; Xu Guanghua; Xie Jun; Zhang Feng; Li Lili; Han Chengcheng; Li Yeping; Sun Jingjing

    2013-01-01

    Electroencephalogram (EEG) is an efficient tool for exploring the human brain. It plays a very important role in the diagnosis of disorders related to epilepsy and in the development of new interaction techniques between machines and human beings, namely, the brain-computer interface (BCI). The purpose of this review is to illustrate recent research in EEG processing and EEG-based BCI. First, we outline several methods for removing artifacts from EEGs, and classical algorithms for fatigue detection are discussed. Then, two BCI paradigms, motor imagery and steady-state motion visual evoked potentials (SSMVEP) produced by oscillating Newton's rings, are introduced. Finally, BCI systems including wheelchair control and electronic car navigation are elaborated. As a new technique for controlling equipment, BCI has promising potential in the rehabilitation of central nervous system disorders such as stroke and spinal cord injury, in the treatment of attention deficit hyperactivity disorder (ADHD) in children, and in the development of novel games such as brain-controlled auto racing.
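
    None of the review's algorithms are given in the record; as a minimal sketch of the kind of preprocessing and steady-state detection it surveys, the snippet below band-pass filters a synthetic EEG channel and scores the response at an assumed 12 Hz stimulation frequency. The sampling rate, band edges, and scoring rule are illustrative assumptions.

    ```python
    # Minimal sketch: band-pass filtering plus a spectral-power score at the
    # stimulation frequency, using synthetic data.
    import numpy as np
    from scipy.signal import butter, filtfilt, welch

    fs, f_stim = 250.0, 12.0
    t = np.arange(0, 10, 1 / fs)
    eeg = 0.5 * np.sin(2 * np.pi * f_stim * t) \
        + np.random.default_rng(2).normal(scale=2.0, size=t.size)

    b, a = butter(4, [5 / (fs / 2), 40 / (fs / 2)], btype="band")   # 5-40 Hz band-pass
    filtered = filtfilt(b, a, eeg)

    freqs, psd = welch(filtered, fs=fs, nperseg=512)
    score = psd[np.argmin(np.abs(freqs - f_stim))] / psd.mean()     # relative power at 12 Hz
    print(f"steady-state score at {f_stim} Hz: {score:.1f}")
    ```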

  7. Computer Aided Teaching of Digital Signal Processing.

    Science.gov (United States)

    Castro, Ian P.

    1990-01-01

    Describes a microcomputer-based software package developed at the University of Surrey for teaching digital signal processing to undergraduate science and engineering students. Menu-driven software capabilities are explained, including demonstration of qualitative concepts and experimentation with quantitative data, and examples are given of…

  8. X/Qs and unit dose calculations for Central Waste Complex interim safety basis effort

    International Nuclear Information System (INIS)

    The objective for this problem is to calculate the ground-level release dispersion factors (X/Q) and unit doses for an onsite facility and for offsite receptors at the site boundary and at Highway 240, accounting for plume meander, building wake effect, plume rise, and the combined effect. The release location is at Central Waste Complex Building P4 in the 200 West Area. The onsite facility is located at Building P7. Acute ground-level release 99.5 percentile dispersion factors (X/Q) were generated using the GXQ code. The unit doses were calculated using the GENII code. The dimensions of Building P4 are 15 m (W) x 24 m (L) x 6 m (H).
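
    The actual calculation was done with the GXQ and GENII codes, which are not reproduced here; the sketch below only illustrates, in generic form, how an acute inhalation dose scales with the dispersion factor X/Q, the released activity, a breathing rate, and a dose conversion factor. All numerical values are placeholders, not values from this analysis.

    ```python
    # Generic illustration (not GXQ/GENII output):
    # dose = Q_released * (X/Q) * breathing_rate * dose_conversion_factor.
    def acute_inhalation_dose(q_released_bq, chi_over_q_s_per_m3,
                              breathing_rate_m3_per_s=3.3e-4,
                              dcf_sv_per_bq=1.0e-8):
        """Return dose in sievert for a ground-level acute release."""
        air_concentration_time_integral = q_released_bq * chi_over_q_s_per_m3  # Bq*s/m^3
        intake_bq = air_concentration_time_integral * breathing_rate_m3_per_s
        return intake_bq * dcf_sv_per_bq

    dose = acute_inhalation_dose(q_released_bq=1.0e9, chi_over_q_s_per_m3=2.0e-5)
    print(f"illustrative dose: {dose:.2e} Sv")
    ```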

  9. Design, implementation and evaluation of a central unit for controlling climatic conditions in the greenhouse

    Directory of Open Access Journals (Sweden)

    Gh. Zarei

    2016-02-01

    Full Text Available In greenhouse culture, in addition to increasing the quantity and quality of crop production compared with traditional methods, agricultural inputs are saved, too. Recently, using new methods, designs and materials, and higher automation in greenhouses, better management has become possible for enhancing yield and improving the quality of greenhouse crops. The constructed and evaluated central controller unit (CCU) is a central control and computerized monitoring system for greenhouse application. Several sensors, one CCU, several actuators, and a data-collection and recording unit were the major components of this system. The actuators included heating, cooling, spraying, ventilation and lighting systems, and the sensors measure temperature, humidity, carbon dioxide, oxygen and light inside and outside the greenhouse. Environmental conditions were measured by the sensors and transmitted to the CCU. Based on this information, the CCU changed variables to keep the greenhouse environmental conditions within predetermined ranges. This system was made entirely of local instruments and parts and had the ability to be adapted to the needs of the client. The designed and implemented CCU was tested in a greenhouse located in the Agriculture and Natural Resources Research Center of Khuzestan Province during the summer season of 2011. The CCU operated successfully, controlling greenhouse temperature in the range of 22-29 ˚C and relative humidity in the range of 35-55%, switching on artificial lighting when the received radiation was less than 800 lux, and turning on the ventilation units if the concentration of carbon dioxide was more than 800 mg/L.
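
    The CCU firmware is not described in the record; as a simplified on/off sketch of the kind of control logic such a unit performs, the snippet below switches actuators against the setpoint ranges quoted in the abstract. The sensor readings and the control rules themselves are assumptions for illustration only.

    ```python
    # Simplified on/off control sketch (not the actual CCU firmware).
    def control_step(temp_c, rh_pct, light_lux, co2_mg_per_l):
        actions = {
            "heating":     temp_c < 22,
            "cooling":     temp_c > 29,
            "spraying":    rh_pct < 35,                        # raise humidity
            "ventilation": rh_pct > 55 or co2_mg_per_l > 800,
            "lighting":    light_lux < 800,
        }
        return {name: on for name, on in actions.items() if on}

    print(control_step(temp_c=31, rh_pct=30, light_lux=600, co2_mg_per_l=950))
    # -> {'cooling': True, 'spraying': True, 'ventilation': True, 'lighting': True}
    ```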

  10. A monolithic 3D integrated nanomagnetic co-processing unit

    Science.gov (United States)

    Becherer, M.; Breitkreutz-v. Gamm, S.; Eichwald, I.; Žiemys, G.; Kiermaier, J.; Csaba, G.; Schmitt-Landsiedel, D.

    2016-01-01

    As CMOS scaling becomes more and more challenging there is strong impetus for beyond CMOS device research to add new functionality to ICs. In this article, a promising technology with non-volatile ferromagnetic computing states - the so-called Perpendicular Nanomagnetic Logic (pNML) - is reviewed. After introducing the 2D planar implementation of NML with magnetization perpendicular to the surface, the path to monolithically 3D integrated systems is discussed. Instead of CMOS substitution, additional functionality is added by a co-processor architecture as a prospective back-end-of-line (BEOL) process, where the computing elements are clocked by a soft-magnetic on-chip inductor. The unconventional computation in the ferromagnetic domain can lead to highly dense computing structures without leakage currents, attojoule dissipation per bit operation and data-throughputs comparable to state-of-the-art high-performance CMOS CPUs. In appropriate applications and with specialized computing architectures they might even circumvent the bottleneck of time-consuming memory access, as computation is inherently performed with non-volatile computing states.

  11. COST ESTIMATION MODELS FOR DRINKING WATER TREATMENT UNIT PROCESSES

    Science.gov (United States)

    Cost models for unit processes typically utilized in a conventional water treatment plant and in package treatment plant technology are compiled in this paper. The cost curves are represented as a function of specified design parameters and are categorized into four major catego...
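
    The compiled cost curves themselves are not reproduced in the record; as a generic illustration of a cost model expressed as a function of a design parameter, the snippet below evaluates a power-law curve with made-up coefficients, not the paper's fitted values.

    ```python
    # Generic power-law cost curve, cost = a * (design parameter) ** b.
    # Coefficients are illustrative only.
    def unit_process_cost(design_flow_mgd, a=250_000.0, b=0.7):
        """Capital cost (USD) as a function of design flow in million gallons/day."""
        return a * design_flow_mgd ** b

    for q in (0.1, 1.0, 10.0):
        print(f"{q:5.1f} MGD -> ${unit_process_cost(q):,.0f}")
    ```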

  12. Practical Secure Computation with Pre-Processing

    DEFF Research Database (Denmark)

    Zakarias, Rasmus Winther

    2016-01-01

    [...] communicating O(n log* n) elements in the small field and performing O(n log n log log n) operations on small field elements. The fourth main result of the dissertation is a generic and efficient protocol for proving knowledge of a witness for circuit satisfiability in Zero-Knowledge. We prove our [...] Secure Multiparty Computation has been divided between protocols best suited for binary circuits and protocols best suited for arithmetic circuits. With their MiniMac protocol in [DZ13], Damgård and Zakarias take an important step towards bridging these worlds with an arithmetic protocol tuned for [...] yields an astonishingly fast evaluation per AES block of 400 μs = 400 x 10^-6 seconds. Our techniques focus on AES but work in general. In particular, we reduce the round complexity of the protocol using oblivious table lookup to take care of the non-linear parts. At first glance one might expect table lookup to [...]

  13. Computer simulation of surface and film processes

    Science.gov (United States)

    Tiller, W. A.; Halicioglu, M. T.

    1984-01-01

    All of the investigations performed employed, in one way or another, a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems of discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of Markov-chain ensemble averaging techniques to model equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of a triatomic cluster were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties for the Si and SiC systems were calculated. Results obtained from static simulation calculations of slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
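
    The Si/SiC work used more elaborate potentials with three-body terms, which are not given in the record; as a minimal molecular-dynamics illustration only, the sketch below performs one velocity-Verlet step with a simple two-body Lennard-Jones potential. Particle count, box size, and time step are assumptions.

    ```python
    # Minimal MD sketch: one velocity-Verlet step with a Lennard-Jones potential.
    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        dx = pos[None, :, :] - pos[:, None, :]
        r2 = (dx ** 2).sum(-1)
        np.fill_diagonal(r2, np.inf)                      # skip self-pairs
        inv_r6 = (sigma ** 2 / r2) ** 3
        fmag = 24 * eps * (2 * inv_r6 ** 2 - inv_r6) / r2
        return -(fmag[:, :, None] * dx).sum(axis=1)       # force on each particle

    def verlet_step(pos, vel, dt=1e-3, mass=1.0):
        f = lj_forces(pos)
        pos_new = pos + vel * dt + 0.5 * f / mass * dt ** 2
        f_new = lj_forces(pos_new)
        vel_new = vel + 0.5 * (f + f_new) / mass * dt
        return pos_new, vel_new

    rng = np.random.default_rng(3)
    pos = rng.uniform(0, 5, size=(32, 3))
    vel = np.zeros((32, 3))
    pos, vel = verlet_step(pos, vel)
    print(pos.shape, vel.shape)
    ```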

  14. Use of parallel computing in mass processing of laser data

    Science.gov (United States)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  15. Launch Site Computer Simulation and its Application to Processes

    Science.gov (United States)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.

  16. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Krichinsky, A.M.

    1983-02-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to estimate the compositions of materials in vessels involved in unit operations and chemical processes. This program has been implemented in a remotely operated nuclear fuel processing plant. NUMATH provides estimates of the steady-state composition of materials residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are cataloged in container-oriented files. The estimated compositions represent materials collected in the applicable vessels, including consideration of materials previously acknowledged in these vessels. The program utilizes process measurements and simple performance models to estimate material holdup and distribution within unit operations. In simulated run-testing, NUMATH typically produced estimates within 5% of the measured inventories for uranium and within 8% of the measured inventories for thorium during steady-state process operation.

  17. Interventions on central computing services during the weekend of 21 and 22 August

    CERN Multimedia

    2004-01-01

    As part of the planned upgrade of the computer centre infrastructure to meet the LHC computing needs, approximately 150 servers, hosting in particular the NICE home directories, Mail services and Web services, will need to be physically relocated to another part of the computing hall during the weekend of 21 and 22 August. On Saturday 21 August, starting from 8:30 a.m., interruptions of typically 60 minutes will take place on the following central computing services: NICE and the whole Windows infrastructure, Mail services, file services (including home directories and DFS workspaces), Web services, VPN access, and Windows Terminal Services. During any interruption, incoming mail from outside CERN will be queued and delivered as soon as the service is operational again. All services should be available again on Saturday 21 at 17:30, but a few additional interruptions will be possible after that time and on Sunday 22 August. IT Department

  18. Effect of High Receiver Thermal Loss Per Unit Area on the Performance of Solar Central Receiver Systems Having Optimum Heliostat Fields and Optimum Receiver Aperture Areas.

    Science.gov (United States)

    Pitman, Charles L.

    Recent efforts in solar central receiver research have been directed toward high temperature applications. Associated with high temperature processes are greater receiver thermal losses due to reradiation and convection. This dissertation examines the performance of central receiver systems having optimum heliostat fields and receiver aperture areas as a function of receiver thermal loss per unit area of receiver aperture. The results address the problem of application optimization (loss varies) as opposed to the problem of optimization of a design for a specific application (loss fixed). A reasonable range of values for the primary independent variable L (the average reradiative and convective loss per unit area of receiver aperture) and a reasonable set of design assumptions were first established. The optimum receiver aperture area, number and spacings of heliostats, and field boundary were then determined for two tower focal heights and for each value of L. From this, the solar subsystem performance for each optimized system was calculated. Heliostat field analysis and optimization required a detailed computational analysis. A significant modification to the standard method of solving the optimization equations, effectively a decoupling of the solution process into collector and receiver subsystem parts, greatly aided the analysis. Results are presented for tower focal heights of 150 and 180 m. Values of L ranging from 0.04 to 0.50 MW m^-2 were considered, roughly corresponding to working fluid temperatures (at receiver exit) in the range of 650 to 1650 °C. As L increases over this range, the receiver thermal efficiency and the receiver interception factor decrease. The optimal power level drops by almost half, and the cost per unit of energy produced increases by about 25% for the base case set of design assumptions. The resulting decrease in solar subsystem efficiency (relative to the defined annual input energy) from 0.57 to 0.35 is about 40% and is a
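
    The dissertation's optimization machinery is not reproduced here; as a back-of-the-envelope sketch of why receiver thermal efficiency falls as L grows, the snippet below evaluates eta_rx = 1 - L / F_aperture, where F_aperture is an assumed average concentrated flux through the aperture (the 3.0 MW/m^2 value is illustrative, not from the study).

    ```python
    # Back-of-the-envelope sketch: receiver thermal efficiency versus the loss
    # per unit aperture area L, with an assumed aperture flux.
    def receiver_thermal_efficiency(loss_mw_per_m2, aperture_flux_mw_per_m2=3.0):
        return max(0.0, 1.0 - loss_mw_per_m2 / aperture_flux_mw_per_m2)

    for L in (0.04, 0.10, 0.25, 0.50):
        print(f"L = {L:.2f} MW/m^2 -> receiver thermal efficiency ~ "
              f"{receiver_thermal_efficiency(L):.2f}")
    ```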

  19. Farm Process (FMP) Parameters used in the Central Valley Hydrologic Model (CVHM)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This digital dataset defines the farm-process parameters used in the transient hydrologic model of the Central Valley flow system. The Central Valley encompasses an...

  20. Clinical radiodiagnosis of metastases of central lung cancer in regional lymph nodes using computers

    International Nuclear Information System (INIS)

    On the basis of literature data and clinical examination (112 patients), methods of clinical-radiological diagnosis of metastases of central lung cancer in regional lymph nodes using computers were developed. The methods were tested on control clinical material (110 patients). Using computers (Bayes and Wald methods), 57.3% and 65.5% correct answers, respectively, were obtained, which is 14.6% and 22.8% higher than the level of clinical diagnosis of metastases. Diagnostic errors are analysed. Complexes of clinical-radiological signs and symptoms of metastases are outlined.
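
    The study's actual sign complexes and probability tables are not given in the record; as a generic sketch of the Bayes-type classification it describes, the snippet below combines binary clinical-radiological signs under a naive independence assumption. The sign names, conditional probabilities, and prior are made up for illustration.

    ```python
    # Generic naive-Bayes sketch over hypothetical binary signs.
    import math

    p_sign_given_metastasis = {"enlarged_node": 0.80, "irregular_contour": 0.65, "hilar_mass": 0.70}
    p_sign_given_no_metastasis = {"enlarged_node": 0.25, "irregular_contour": 0.20, "hilar_mass": 0.30}
    prior_metastasis = 0.4

    def posterior_metastasis(findings):
        """findings: dict sign -> bool (present/absent)."""
        log_m = math.log(prior_metastasis)
        log_n = math.log(1 - prior_metastasis)
        for sign, present in findings.items():
            pm, pn = p_sign_given_metastasis[sign], p_sign_given_no_metastasis[sign]
            log_m += math.log(pm if present else 1 - pm)
            log_n += math.log(pn if present else 1 - pn)
        return 1 / (1 + math.exp(log_n - log_m))

    print(posterior_metastasis({"enlarged_node": True, "irregular_contour": True, "hilar_mass": False}))
    ```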

  1. Analysis of source spectra, attenuation, and site effects from central and eastern United States earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lindley, G.

    1998-02-01

    This report describes the results from three studies of source spectra, attenuation, and site effects of central and eastern United States earthquakes. In the first study source parameter estimates taken from 27 previous studies were combined to test the assumption that the earthquake stress drop is roughly a constant, independent of earthquake size. 200 estimates of stress drop and seismic moment from eastern North American earthquakes were combined. It was found that the estimated stress drop from the 27 studies increases approximately as the square-root of the seismic moment, from about 3 bars at 10{sup 20} dyne-cm to 690 bars at 10{sup 25} dyne-cm. These results do not support the assumption of a constant stress drop when estimating ground motion parameters from eastern North American earthquakes. In the second study, broadband seismograms recorded by the United States National Seismograph Network and cooperating stations have been analysed to determine Q{sub Lg} as a function of frequency in five regions: the northeastern US, southeastern US, central US, northern Basin and Range, and California and western Nevada. In the third study, using spectral analysis, estimates have been made for the anelastic attenuation of four regional phases, and estimates have been made for the source parameters of 27 earthquakes, including the M{sub b} 5.6, 14 April, 1995, West Texas earthquake.
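
    The reported endpoints imply the stated near-square-root scaling; the short check below fits a power law through the two quoted values (3 bars at 10^20 dyne-cm, 690 bars at 10^25 dyne-cm) and evaluates it at an intermediate moment. This is only arithmetic on the abstract's numbers, not the report's analysis.

    ```python
    # Quick check of the reported stress-drop scaling with seismic moment.
    import math

    m0 = (1e20, 1e25)          # seismic moment, dyne-cm
    sd = (3.0, 690.0)          # stress drop, bars

    exponent = math.log(sd[1] / sd[0]) / math.log(m0[1] / m0[0])
    print(f"implied exponent: {exponent:.2f}")          # ~0.47, roughly square-root scaling

    def stress_drop(moment_dyne_cm):
        """Interpolated stress drop (bars) under this empirical power law."""
        return sd[0] * (moment_dyne_cm / m0[0]) ** exponent

    print(f"at 1e23 dyne-cm: ~{stress_drop(1e23):.0f} bars")
    ```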

  2. Analysis of source spectra, attenuation, and site effects from central and eastern United States earthquakes

    International Nuclear Information System (INIS)

    This report describes the results from three studies of source spectra, attenuation, and site effects of central and eastern United States earthquakes. In the first study source parameter estimates taken from 27 previous studies were combined to test the assumption that the earthquake stress drop is roughly a constant, independent of earthquake size. 200 estimates of stress drop and seismic moment from eastern North American earthquakes were combined. It was found that the estimated stress drop from the 27 studies increases approximately as the square-root of the seismic moment, from about 3 bars at 10^20 dyne-cm to 690 bars at 10^25 dyne-cm. These results do not support the assumption of a constant stress drop when estimating ground motion parameters from eastern North American earthquakes. In the second study, broadband seismograms recorded by the United States National Seismograph Network and cooperating stations have been analysed to determine Q_Lg as a function of frequency in five regions: the northeastern US, southeastern US, central US, northern Basin and Range, and California and western Nevada. In the third study, using spectral analysis, estimates have been made for the anelastic attenuation of four regional phases, and estimates have been made for the source parameters of 27 earthquakes, including the M_b 5.6, 14 April, 1995, West Texas earthquake.

  3. Research on Three Dimensional Computer Assistance Assembly Process Design System

    Institute of Scientific and Technical Information of China (English)

    HOU Wenjun; YAN Yaoqi; DUAN Wenjia; SUN Hanxu

    2006-01-01

    Computer-aided process planning will certainly play a significant role in the success of enterprise informationization, and 3-dimensional design will promote 3-dimensional process planning. This article analyzes the current situation and problems of assembly process planning, presents a 3-dimensional computer-aided assembly process planning system (3D-VAPP), and investigates product information extraction, assembly sequence and path planning in visual interactive assembly process design, dynamic simulation of assembly and process verification, assembly animation output and automatic exploded-view generation, interactive craft filling and craft knowledge management, etc. It also gives a multi-layer collision detection and multi-perspective automatic camera switching algorithm. Experiments were done to validate the feasibility of the technology and algorithms, which established the foundation of 3-dimensional computer-aided process planning.

  4. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  5. Central Nervous System Based Computing Models for Shelf Life Prediction of Soft Mouth Melting Milk Cakes

    Directory of Open Access Journals (Sweden)

    Gyanendra Kumar Goyal

    2012-04-01

    Full Text Available This paper presents the latency and potential of a central nervous system based intelligent computing model for detecting the shelf life of soft mouth melting milk cakes stored at 10 °C. Soft mouth melting milk cakes are an exquisite sweetmeat cuisine made out of heat- and acid-thickened solidified sweetened milk. In today's highly competitive market, consumers look for good quality food products. Shelf life is a good and accurate indicator of food quality and safety. To achieve good quality of food products, detection of shelf life is important. A central nervous system based intelligent computing model was developed, which detected a shelf life of 19.82 days, as against the experimental shelf life of 21 days.

  6. Leveraging EarthScope USArray with the Central and Eastern United States Seismic Network

    Science.gov (United States)

    Busby, R.; Sumy, D. F.; Woodward, R.; Frassetto, A.; Brudzinski, M.

    2015-12-01

    Recent earthquakes, such as the 2011 M5.8 Mineral, Virginia earthquake, raised awareness of the comparative lack of knowledge about seismicity, site response to ground shaking, and the basic geologic underpinnings in this densely populated region. With this in mind, the National Science Foundation, United States Geological Survey, United States Nuclear Regulatory Commission, and Department of Energy supported the creation of the Central and Eastern United States Seismic Network (CEUSN). These agencies, along with the IRIS Consortium who operates the network, recognized the unique opportunity to retain EarthScope Transportable Array (TA) seismic stations in this region beyond the standard deployment duration of two years per site. The CEUSN project supports 159 broadband TA stations, more than 30 with strong motion sensors added, that are scheduled to operate through 2017. Stations were prioritized in regions of elevated seismic hazard that have not been traditionally heavily monitored, such as the Charlevoix and Central Virginia Seismic Zones, and in regions proximal to nuclear power plants and other critical facilities. The stations (network code N4) transmit data in real time, with broadband and strong motion sensors sampling at 100 samples per second. More broadly the CEUSN concept also recognizes the existing backbone coverage of permanently operating seismometers in the CEUS, and forms a network of over 300 broadband stations. This multi-agency collaboration is motivated by the opportunity to use one facility to address multiple missions and needs in a way that is rarely possible, and to produce data that enables both researchers and federal agencies to better understand seismic hazard potential and associated seismic risks. In June 2015, the CEUSN Working Group (www.usarray.org/ceusn_working_group) was formed to review and provide advice to IRIS Management on the performance of the CEUSN as it relates to the target scientific goals and objectives. Map shows

  7. Central Nervous System Based Computing Models for Shelf Life Prediction of Soft Mouth Melting Milk Cakes

    OpenAIRE

    Gyanendra Kumar Goyal; Sumit Goyal

    2012-01-01

    This paper presents the latency and potential of central nervous system based system intelligent computer engineering system for detecting shelf life of soft mouth melting milk cakes stored at 10o C. Soft mouth melting milk cakes are exquisite sweetmeat cuisine made out of heat and acid thickened solidified sweetened milk. In today’s highly competitive market consumers look for good quality food products. Shelf life is a good and accurate indicator to the food quality and safety. To achieve g...

  8. Computer and control applications in a vegetable processing plant

    Science.gov (United States)

    There are many advantages to the use of computers and control in food industry. Software in the food industry takes 2 forms - general purpose commercial computer software and software for specialized applications, such as drying and thermal processing of foods. Many applied simulation models for d...

  9. Computer Data Processing of the Hydrogen Peroxide Decomposition Reaction

    Institute of Scientific and Technical Information of China (English)

    余逸男; 胡良剑

    2003-01-01

    Two methods of computer data processing, linear fitting and nonlinear fitting, are applied to compute the rate constant of the hydrogen peroxide decomposition reaction. The results indicate that the new methods not only work without the need to measure the final oxygen volume, but also evidently reduce the fitting errors.
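
    The paper's own code and data are not reproduced in the record; as an illustrative sketch of the idea, the snippet below fits the first-order oxygen-volume curve V(t) = V_inf * (1 - exp(-k t)) by nonlinear least squares, so the final volume V_inf is estimated rather than measured. The kinetic values and noise level are synthetic assumptions.

    ```python
    # Illustrative nonlinear fit of a first-order decomposition curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def oxygen_volume(t, v_inf, k):
        return v_inf * (1.0 - np.exp(-k * t))

    k_true, v_inf_true = 0.05, 40.0                    # assumed "true" values
    t = np.linspace(0, 60, 25)                         # minutes
    rng = np.random.default_rng(4)
    v_obs = oxygen_volume(t, v_inf_true, k_true) + rng.normal(scale=0.3, size=t.size)

    (v_inf_fit, k_fit), _ = curve_fit(oxygen_volume, t, v_obs, p0=(30.0, 0.01))
    print(f"fitted k = {k_fit:.4f} min^-1, fitted V_inf = {v_inf_fit:.1f} mL")
    ```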

  10. Contributions to Parallel Simulation of Equation-Based Models on Graphics Processing Units

    OpenAIRE

    Stavåker, Kristian

    2011-01-01

    In this thesis we investigate techniques and methods for parallel simulation of equation-based, object-oriented (EOO) Modelica models on graphics processing units (GPUs). Modelica is being developed through an international effort via the Modelica Association. With Modelica it is possible to build computationally heavy models; simulating such models, however, might take a considerable amount of time. Therefore, techniques for utilizing parallel multi-core architectures for simulation are desirable...

  11. Distributed match-making for processes in computer networks

    OpenAIRE

    Mullender, Sape; Vitányi, Paul

    1986-01-01

    In the very large multiprocessor systems and, on a grander scale, computer networks now emerging, processes are not tied to fixed processors but run on processors taken from a pool of processors. Processors are released when a process dies, migrates or when the process crashes. In distributed operating systems using the service concept, processes can be clients asking for a service, servers giving a service or both. Establishing communication between a process asking for a service and a proce...

  12. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. These tools are used to develop an optimized process using rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

  13. Ambient Ammonia Monitoring in the Central United States Using Passive Diffusion Samplers

    Science.gov (United States)

    Caughey, M.; Gay, D.; Sweet, C.

    2008-12-01

    Environmental scientists and governmental authorities are increasingly aware of the need for more comprehensive measurements of ambient ammonia in urban, rural and remote locations. As the predominant alkaline gas, ammonia plays a critical role in atmospheric chemistry by reacting readily with acidic gases and particles. Ammonium salts often comprise a major portion of the aerosols that impair visibility, not only in urban areas, but also in national parks and other Class I areas. Ammonia is also important as a plant nutrient that directly or indirectly affects terrestrial and aquatic biomes. Successful computer simulations of important environmental processes require an extensive representative data set of ambient ammonia measurements in the range of 0.1 ppbv or greater. Generally instruments with that level of sensitivity are not only expensive, but also require electrical connections, an enclosed shelter and, in many instances, frequent attention from trained technicians. Such requirements significantly restrict the number and locations of ambient ammonia monitors that can be supported. As an alternative we have employed simple passive diffusion samplers to measure ambient ammonia at 9 monitoring sites in the central U.S. over the past 3 years. Passive samplers consist of a layer of an acidic trapping medium supported at a fixed distance behind a microporous barrier for which the diffusive properties are known. Ammonia uptake rates are determined by the manufacturer under controlled laboratory conditions. (When practical, field results are compared against those from collocated conventional samplers, e.g., pumped annular denuders.) After a known exposure time at the sampling site, the sampler is resealed in protective packaging and shipped to the analytical laboratory where the ammonia captured in the acidic medium is carefully extracted and quantified. Because passive samplers are comparatively inexpensive and do not require electricity or other facilities they
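
    The manufacturer-specific uptake rates used in the network are not given in the record; the generic calculation below shows how a time-averaged concentration is recovered from the collected mass, an uptake rate, and the exposure time. The numerical values are illustrative assumptions, and the ppbv conversion assumes 25 °C and 1 atm.

    ```python
    # Generic passive-sampler calculation:
    # concentration = collected mass / (uptake rate * exposure time).
    def ammonia_concentration_ug_m3(mass_collected_ug, uptake_rate_ml_min, exposure_min):
        sampled_volume_m3 = uptake_rate_ml_min * exposure_min * 1e-6   # mL -> m^3
        return mass_collected_ug / sampled_volume_m3

    c = ammonia_concentration_ug_m3(mass_collected_ug=2.1,
                                    uptake_rate_ml_min=30.0,
                                    exposure_min=2 * 7 * 24 * 60)      # two-week deployment
    print(f"~{c:.2f} ug/m^3  (~{c / 0.70:.2f} ppbv at 25 C)")          # 1 ppbv NH3 ~ 0.70 ug/m^3
    ```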

  14. New photosensitizer with phenylenebisthiophene central unit and cyanovinylene 4-nitrophenyl terminal units for dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Mikroyannidis, J.A., E-mail: mikroyan@chemistry.upatras.gr [Chemical Technology Laboratory, Department of Chemistry, University of Patras, GR-26500 Patras (Greece); Suresh, P. [Physics Department, Molecular Electronic and Optoelectronic Device Laboratory, JNV University, Jodhpur (Raj.) 342005 (India); Roy, M.S. [Defence Laboratory, Jodhpur (Raj.) 342011 (India); Sharma, G.D., E-mail: sharmagd_in@yahoo.com [Physics Department, Molecular Electronic and Optoelectronic Device Laboratory, JNV University, Jodhpur (Raj.) 342005 (India); R and D Centre for Engineering and Science, Jaipur Engineering College, Kukas, Jaipur (Raj.) (India)

    2011-06-30

    Graphical abstract: A novel dye D was synthesized and used as a photosensitizer for quasi solid state dye-sensitized solar cells. A power conversion efficiency of 4.4% was obtained which was improved to 5.52% when diphenylphosphinic acid (DPPA) was added as coadsorbent. Highlights: > A new low band gap photosensitizer with cyanovinylene 4-nitrophenyl terminal units was synthesized. > A power conversion efficiency of 4.4% was obtained for the dye-sensitized solar cell based on this photosensitizer. > The power conversion efficiency of the dye-sensitized solar cell was further improved to 5.52% when diphenylphosphinic acid was added as coadsorbent. - Abstract: A new low band gap photosensitizer, D, which contains a 2,2'-(1,4-phenylene)bisthiophene central unit and cyanovinylene 4-nitrophenyl terminal units at both sides was synthesized. The two carboxyls attached to the 2,5-positions of the phenylene ring act as anchoring groups. Dye D was soluble in common organic solvents, showed a long-wavelength absorption maximum at 620-636 nm and an optical band gap of 1.72 eV. The electrochemical parameters, i.e. the highest occupied molecular orbital (HOMO) (-5.1 eV) and the lowest unoccupied molecular orbital (LUMO) (-3.3 eV) energy levels of D, show that this dye is suitable as a molecular sensitizer. The quasi solid state dye-sensitized solar cell (DSSC) based on D shows a short circuit current (J{sub sc}) of 9.95 mA/cm{sup 2}, an open circuit voltage (V{sub oc}) of 0.70 V, and a fill factor (FF) of 0.64, corresponding to an overall power conversion efficiency (PCE) of 4.40% under 100 mW/cm{sup 2} irradiation. The overall PCE has been further improved to 5.52% when diphenylphosphinic acid (DPPA) coadsorbent is incorporated into the D solution. This increased PCE has been attributed to the enhancement in the electron lifetime and reduced recombination of injected electrons with the iodide ions present in the electrolyte with the use of DPPA as coadsorbent. The
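
    The efficiency reported in the abstract follows directly from the standard relation PCE = Jsc * Voc * FF / Pin; the short check below plugs in the quoted cell parameters.

    ```python
    # Quick check of the reported cell efficiency from the quoted parameters.
    jsc = 9.95e-3      # A/cm^2  (9.95 mA/cm^2)
    voc = 0.70         # V
    ff = 0.64
    p_in = 100e-3      # W/cm^2  (100 mW/cm^2)

    pce = jsc * voc * ff / p_in
    print(f"PCE = {pce * 100:.2f} %")    # ~4.46 %, consistent with the reported 4.40 %
    ```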

  15. Proton computed tomography from multiple physics processes

    Science.gov (United States)

    Bopp, C.; Colin, J.; Cussol, D.; Finck, Ch; Labalme, M.; Rousseau, M.; Brasse, D.

    2013-10-01

    Proton CT (pCT) nowadays aims at improving hadron therapy treatment planning by mapping the relative stopping power (RSP) of materials with respect to water. The RSP depends mainly on the electron density of the materials. The main information used is the energy of the protons. However, during a pCT acquisition, the spatial and angular deviation of each particle is recorded and the information about its transmission is implicitly available. The potential use of those observables in order to get information about the materials is being investigated. Monte Carlo simulations of protons sent into homogeneous materials were performed, and the influence of the chemical composition on the outputs was studied. A pCT acquisition of a head phantom scan was simulated. Brain lesions with the same electron density but different concentrations of oxygen were used to evaluate the different observables. Tomographic images from the different physics processes were reconstructed using a filtered back-projection algorithm. Preliminary results indicate that information is present in the reconstructed images of transmission and angular deviation that may help differentiate tissues. However, the statistical uncertainty on these observables generates further challenge in order to obtain an optimal reconstruction and extract the most pertinent information.

  16. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    Science.gov (United States)

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  17. Evaluation of the Central Hearing Process in Parkinson Patients

    OpenAIRE

    Santos, Rosane Sampaio; Teive, Hélio A. Ghizoni; Gorski, Leslie Palma; Klagenberg, Karlin Fabianne; Muñoz, Monica Barby; Zeigelboim, Bianca Simone

    2011-01-01

    Introduction: Parkinson disease (PD) is a degenerating disease with a deceitful character, impairing the central nervous system and causing biological, psychological and social changes. It shows motor signs and symptoms characterized by trembling, postural instability, rigidity and bradykinesia. Objective: To evaluate the central hearing function in PD patients. Method: A descriptive, prospect and transversal study, in which 10 individuals diagnosed of PD named study group (SG) and 10 normall...

  18. Computer Forensics Field Triage Process Model

    Directory of Open Access Journals (Sweden)

    Marcus K. Rogers

    2006-06-01

    Full Text Available With the proliferation of digital based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time - measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not negate the ability that once the initial field triage is concluded, the system(s)/storage media be transported back to a lab environment for a more thorough examination and analysis. The CFFTPM has been successfully used in various real world cases, and its investigative importance and pragmatic approach has been amply demonstrated. Furthermore, the derived evidence from these cases has not been challenged in the court proceedings where it has been introduced. The current article describes the CFFTPM in detail, discusses the model's forensic soundness, investigative support capabilities and practical considerations.

  19. Accelerating Image Reconstruction in Three-Dimensional Optoacoustic Tomography on Graphics Processing Units

    CERN Document Server

    Wang, Kun; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A; 10.1118/1.4774361

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional (2D) imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphic processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer-simulation and experimental studies are conducted to investigate the computational efficiency and numerical a...

  20. Effects of aging on peripheral and central auditory processing in rats.

    Science.gov (United States)

    Costa, Margarida; Lepore, Franco; Prévost, François; Guillemot, Jean-Paul

    2016-08-01

    Hearing loss is a hallmark sign in the elderly population. Decline in auditory perception provokes deficits in the ability to localize sound sources and reduces speech perception, particularly in noise. In addition to a loss of peripheral hearing sensitivity, changes in more complex central structures have also been demonstrated. Related to these, this study examines the auditory directional maps in the deep layers of the superior colliculus of the rat. Hence, anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats underwent distortion product of otoacoustic emissions (DPOAEs) to assess cochlear function. Then, auditory brainstem responses (ABRs) were assessed, followed by extracellular single-unit recordings to determine age-related effects on central auditory functions. DPOAE amplitude levels were decreased in aged rats although they were still present between 3.0 and 24.0 kHz. ABR level thresholds in aged rats were significantly elevated at an early (cochlear nucleus - wave II) stage in the auditory brainstem. In the superior colliculus, thresholds were increased and the tuning widths of the directional receptive fields were significantly wider. Moreover, no systematic directional spatial arrangement was present among the neurons of the aged rats, implying that the topographical organization of the auditory directional map was abolished. These results suggest that the deterioration of the auditory directional spatial map can, to some extent, be attributable to age-related dysfunction at more central, perceptual stages of auditory processing.

  1. Seismic proving test of process computer systems with a seismic floor isolation system

    Energy Technology Data Exchange (ETDEWEB)

    Fujimoto, S.; Niwa, H.; Kondo, H. [Toshiba Corp., Kawasaki (Japan)] [and others

    1995-12-01

    The authors have carried out seismic proving tests for process computer systems as a Nuclear Power Engineering Corporation (NUPEC) project sponsored by the Ministry of International Trade and Industry (MITI). This paper presents the seismic test results for evaluating functional capabilities of process computer systems with a seismic floor isolation system. The seismic floor isolation system to isolate the horizontal motion was composed of a floor frame (13 m x 13 m), ball bearing units, and spring-damper units. A series of seismic excitation tests was carried out using a large-scale shaking table of NUPEC. From the test results, the functional capabilities during large earthquakes of computer systems with a seismic floor isolation system were verified.

  2. Seismic proving test of process computer systems with a seismic floor isolation system

    International Nuclear Information System (INIS)

    The authors have carried out seismic proving tests for process computer systems as a Nuclear Power Engineering Corporation (NUPEC) project sponsored by the Ministry of International Trade and Industry (MITI). This paper presents the seismic test results for evaluating functional capabilities of process computer systems with a seismic floor isolation system. The seismic floor isolation system to isolate the horizontal motion was composed of a floor frame (13 m x 13 m), ball bearing units, and spring-damper units. A series of seismic excitation tests was carried out using a large-scale shaking table of NUPEC. From the test results, the functional capabilities during large earthquakes of computer systems with a seismic floor isolation system were verified

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  4. Theoretic computing model of combustion process of asphalt smoke

    Institute of Scientific and Technical Information of China (English)

    HUANG Rui; CHAI Li-yuan; HE De-wen; PENG Bing; WANG Yun-yan

    2005-01-01

    Based on the data and methods provided in the research literature, a discretized mathematical model of the combustion process of asphalt smoke is established by theoretical analysis. Through computer programming, the dynamic combustion process of asphalt smoke is calculated to simulate an experimental model. The computed results show that the temperature and the concentration of the asphalt smoke influence its burning temperature in an approximately linear manner. The quantity of fuel consumed to ignite the asphalt smoke needs to be determined from these two factors.

  5. Everything You Always Wanted to Know about Computers but Were Afraid to Ask.

    Science.gov (United States)

    DiSpezio, Michael A.

    1989-01-01

    An overview of the basics of computers is presented. Definitions and discussions of processing, programs, memory, DOS, anatomy and design, central processing unit (CPU), disk drives, floppy disks, and peripherals are included. This article was designed to help teachers to understand basic computer terminology. (CW)

  6. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2012-08-22

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants... regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems... entitled ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear...

  7. Oaks were the historical foundation genus of the east-central United States

    Science.gov (United States)

    Hanberry, Brice B.; Nowacki, Gregory J.

    2016-08-01

    Foundation tree species are dominant and define ecosystems. Because of the historical importance of oaks (Quercus) in east-central United States, it was unlikely that oak associates, such as pines (Pinus), hickories (Carya) and chestnut (Castanea), rose to this status. We used 46 historical tree studies or databases (ca. 1620-1900) covering 28 states, 1.7 million trees, and 50% of the area of the eastern United States to examine importance of oaks compared to pines, hickories, and chestnuts. Oak was the most abundant genus, ranging from 40% to 70% of total tree composition at the ecological province scale and generally increasing in dominance from east to west across this area. Pines, hickories, and chestnuts were co-dominant (ratio of oak composition to other genera of United States, and thus by definition, were not foundational. Although other genera may be called foundational because of localized abundance or perceptions resulting from inherited viewpoints, they decline from consideration when compared to overwhelming oak abundance across this spatial extent. The open structure and high-light conditions of oak ecosystems uniquely supported species-rich understories. Loss of oak as a foundation genus has occurred with loss of open forest ecosystems at landscape scales.

  8. Parallel Computer Vision Algorithms for Graphics Processing Units

    OpenAIRE

    Berjón Díez, Daniel

    2016-01-01

    The evolution of smartphones equipped with digital cameras is driving a growing demand for ever more complex applications that require real-time computer vision algorithms; since the size of video signals only keeps increasing while the performance of single-core processors has stagnated, new computer vision algorithms must be designed to be parallel so that they can run on multiple processo...

  9. Mathematical modelling in the computer-aided process planning

    Science.gov (United States)

    Mitin, S.; Bochkarev, P.

    2016-04-01

    This paper presents new approaches to the organization of manufacturing preparation and the mathematical models related to the development of the computer-aided multi-product process planning (CAMPP) system. The CAMPP system has some peculiarities compared to existing computer-aided process planning (CAPP) systems: fully formalized development of the machining operations; the capacity to create and formalize the interrelationships among design, process planning, and process implementation; and procedures for taking the real manufacturing conditions into account. The paper describes the structure of the CAMPP system and presents the mathematical models and methods used to formalize the design procedures.

  10. Towards Process Support for Migrating Applications to Cloud Computing

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali

    2012-01-01

    Cloud computing is an active area of research for industry and academia. There are a large number of organizations providing cloud computing infrastructure and services. In order to utilize these infrastructure resources and services, existing applications need to be migrated to clouds. However...... for supporting migration to cloud computing based on our experiences from migrating an Open Source System (OSS), Hackystat, to two different cloud computing platforms. We explained the process by performing a comparative analysis of our efforts to migrate Hackystat to Amazon Web Services and Google App Engine....... We also report the potential challenges, suitable solutions, and lessons learned to support the presented process framework. We expect that the reported experiences can serve as guidelines for those who intend to migrate software applications to cloud computing....

  11. Role of centralized review processes for making reimbursement decisions on new health technologies in Europe

    Directory of Open Access Journals (Sweden)

    Stafinski T

    2011-08-01

    Full Text Available Tania Stafinski(1), Devidas Menon(2), Caroline Davis(1), Christopher McCabe(3); (1)Health Technology and Policy Unit, (2)Health Policy and Management, School of Public Health, University of Alberta, Edmonton, Alberta, Canada; (3)Academic Unit of Health Economics, Leeds Institute for Health Sciences, University of Leeds, Leeds, UK. Background: The purpose of this study was to compare centralized reimbursement/coverage decision-making processes for health technologies in 23 European countries, according to: mandate, authority, structure, and policy options; mechanisms for identifying, selecting, and evaluating technologies; clinical and economic evidence expectations; committee composition, procedures, and factors considered; available conditional reimbursement options for promising new technologies; and the manufacturers' roles in the process. Methods: A comprehensive review of publicly available information from peer-reviewed literature (using a variety of bibliographic databases) and gray literature (eg, working papers, committee reports, presentations, and government documents) was conducted. Policy experts in each of the 23 countries were also contacted. All information collected was reviewed by two independent researchers. Results: Most European countries have established centralized reimbursement systems for making decisions on health technologies. However, the scope of technologies considered, as well as processes for identifying, selecting, and reviewing them varies. All systems include an assessment of clinical evidence, compiled in accordance with their own guidelines or internationally recognized published ones. In addition, most systems require an economic evaluation. The quality of such information is typically assessed by content and methodological experts. Committees responsible for formulating recommendations or decisions are multidisciplinary. While criteria used by committees appear transparent, how they are operationalized during deliberations

  12. Neuromotor recovery from stroke: Computational models at central, functional, and muscle synergy level

    Directory of Open Access Journals (Sweden)

    Maura eCasadio

    2013-08-01

    Full Text Available Computational models of neuromotor recovery after a stroke might help to unveil the underlying physiological mechanisms and might suggest how to make recovery faster and more effective. At least in principle, these models could serve: (i) To provide testable hypotheses on the nature of recovery; (ii) To predict the recovery of individual patients; (iii) To design patient-specific 'optimal' therapy, by setting the treatment variables for maximizing the amount of recovery or for achieving a better generalization of the learned abilities across different tasks. Here we review the state of the art of computational models for neuromotor recovery through exercise, and their implications for treatment. We show that to properly account for the computational mechanisms of neuromotor recovery, multiple levels of description need to be taken into account. The review specifically covers models of recovery at central, functional and muscle synergy level.

  13. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to exploit applications in both academia and industry, and it has the potential to support ambitious undertakings in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then ran comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server, and the private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  14. Ontology-based metrics computation for business process analysis

    OpenAIRE

    Carlos Pedrinaci; John Domingue

    2009-01-01

    Business Process Management (BPM) aims to support the whole life-cycle necessary to deploy and maintain business processes in organisations. Crucial within the BPM lifecycle is the analysis of deployed processes. Analysing business processes requires computing metrics that can help determining the health of business activities and thus the whole enterprise. However, the degree of automation currently achieved cannot support the level of reactivity and adaptation demanded by businesses. In thi...

  15. Simulation and Improvement of the Processing Subsystem of the Manchester Dataflow Computer

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    The Manchester dataflow computer is a famous dynamic dataflow computer. It is centralized in architecture and simple in organization. Its overhead for communication and scheduling is very small. Its efficiency comes down when the number of processing elements in the processing subsystem increases. Several articles have evaluated its performance and presented improved methods. The authors studied its processing subsystem and carried out a simulation. The simulation results show that the efficiency of the processing subsystem drops dramatically when the average number of instruction execution microcycles becomes small and the maximum instruction execution rate is nearly attained. Two improved methods are presented to overcome this disadvantage. The improved processing subsystem, with a cheap distributor made up of a bus and a two-level fixed-priority circuit, possesses almost full efficiency whether the average number of instruction execution microcycles is large or small, even when the maximum instruction execution rate is approached.

  16. Using Graphics Processing Units to solve the classical N-body problem in physics and astrophysics

    CERN Document Server

    Spera, Mario

    2014-01-01

    Graphics Processing Units (GPUs) can speed up the numerical solution of various problems in astrophysics including the dynamical evolution of stellar systems; the performance gain can be more than a factor 100 compared to using a Central Processing Unit only. In this work I describe some strategies to speed up the classical N-body problem using GPUs. I show some features of the N-body code HiGPUs as template code. In this context, I also give some hints on the parallel implementation of a regularization method and I introduce the code HiGPUs-R. Although the main application of this work concerns astrophysics, some of the presented techniques are of general validity and can be applied to other branches of physics such as electrodynamics and QCD.
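
    To make concrete the kind of computation a GPU parallelizes here, the sketch below evaluates the direct-summation O(N^2) gravitational accelerations in plain NumPy (Python). It is not code from HiGPUs; the particle data, the softening length, and the choice of units (G = 1) are illustrative assumptions. On a GPU, each of the N accelerations would typically be assigned to one thread, which is where the quoted speed-ups come from.

      import numpy as np

      def accelerations(pos, mass, eps=1e-3):
          # Direct-summation O(N^2) pairwise gravitational accelerations, with G = 1.
          diff = pos[None, :, :] - pos[:, None, :]        # (N, N, 3) displacements r_j - r_i
          dist2 = (diff ** 2).sum(axis=-1) + eps ** 2     # softened squared distances
          inv_d3 = dist2 ** -1.5
          np.fill_diagonal(inv_d3, 0.0)                   # exclude self-interaction
          return (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

      rng = np.random.default_rng(0)
      n = 1024                                            # illustrative particle number
      pos = rng.normal(size=(n, 3))
      mass = np.full(n, 1.0 / n)
      print(accelerations(pos, mass).shape)               # (1024, 3)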

  17. Design of Central Management & Control Unit for Onboard High-Speed Data Handling System

    Institute of Scientific and Technical Information of China (English)

    LI Yan-qin; JIN Sheng-zhen; NING Shu-nian

    2007-01-01

    The Main Optical Telescope (MOT) is an important payload of the Space Solar Telescope (SST), with various instruments and observation modes. Its real-time data handling and its management and control tasks are arduous. Based on the advanced techniques of foreign countries, an improved structure of onboard data handling systems, feasible for the SST, is proposed. This article concentrates on the development of a Central Management & Control Unit (MCU) based on an FPGA and a DSP. By reconfiguring the FPGA and DSP programs, the prototype could perform different tasks. Thus the inheritability of the whole system is improved. The completed dual-channel prototype proves that the system meets all requirements of the MOT. Its high reliability and safety features also meet the requirements under harsh conditions such as mine detection.

  18. Optimal location of centralized biodigesters for small dairy farms: A case study from the United States

    Directory of Open Access Journals (Sweden)

    Deep Mukherjee

    2015-06-01

    Full Text Available Anaerobic digestion technology is available for converting livestock waste to bio-energy, but its potential is far from fully exploited in the United States because the technology has a scale effect. Utilization of the centralized anaerobic digester (CAD) concept could make the technology economically feasible for smaller dairy farms. An interdisciplinary methodology to determine the cost-minimizing location, size, and number of CAD facilities in a rural dairy region with mostly small farms is described. This study employs land suitability analysis, an operations research model, and Geographical Information System (GIS) tools to evaluate the environmental, social, and economic constraints in selecting appropriate sites for CADs in Windham County, Connecticut. Results indicate that overall costs are lower if the CADs are of larger size and are smaller in number.
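
    As a minimal sketch of the cost-minimizing siting idea (not the study's model or data): hypothetical farm coordinates, manure volumes, and pre-screened candidate CAD sites stand in for the GIS-derived inputs, hauling cost is taken as volume times straight-line distance, and the best combination of one or two sites is found by exhaustive search.

      from itertools import combinations
      import math

      # Hypothetical farms: ((x_km, y_km), daily manure volume in m^3).
      farms = [((2, 3), 12), ((5, 1), 8), ((6, 7), 20), ((1, 8), 5), ((9, 4), 15), ((4, 5), 10)]
      # Hypothetical candidate CAD sites surviving the land-suitability screening.
      candidates = [(3, 3), (5, 5), (7, 4), (2, 7)]

      def haul_cost(sites):
          # Each farm ships to its nearest digester; cost ~ volume * distance.
          return sum(vol * min(math.dist(farm, s) for s in sites) for farm, vol in farms)

      best_cost, best_sites = min(
          (haul_cost(combo), combo)
          for k in (1, 2)                        # allow one or two digesters
          for combo in combinations(candidates, k)
      )
      print(best_cost, best_sites)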

  19. Well Installation Report for Corrective Action Unit 443, Central Nevada Test Area, Nye County, Nevada

    International Nuclear Information System (INIS)

    A Corrective Action Investigation (CAI) was performed in several stages from 1999 to 2003, as set forth in the ''Corrective Action Investigation Plan for the Central Nevada Test Area Subsurface Sites, Corrective Action Unit 443'' (DOE/NV, 1999). Groundwater modeling was the primary activity of the CAI. Three phases of modeling were conducted for the Faultless underground nuclear test. The first phase involved the gathering and interpretation of geologic and hydrogeologic data, and inputting the data into a three-dimensional numerical model to depict groundwater flow. The output from the groundwater flow model was used in a transport model to simulate the migration of a radionuclide release (Pohlmann et al., 2000). The second phase of modeling (known as a Data Decision Analysis [DDA]) occurred after NDEP reviewed the first model. This phase was designed to respond to concerns regarding model uncertainty (Pohll and Mihevc, 2000). The third phase of modeling updated the original flow and transport model to incorporate the uncertainty identified in the DDA, and focused the model domain on the region of interest to the transport predictions. This third phase culminated in the calculation of contaminant boundaries for the site (Pohll et al., 2003). Corrective action alternatives were evaluated and an alternative was submitted in the ''Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 443: Central Nevada Test Area-Subsurface'' (NNSA/NSO, 2004). Based on the results of this evaluation, the preferred alternative for CAU 443 is Proof-of-Concept and Monitoring with Institutional Controls. This alternative was judged to meet all requirements for the technical components evaluated and will control inadvertent exposure to contaminated groundwater at CAU 443

  20. Translator-computer interaction in action:An observational process study of computer-aided translation

    OpenAIRE

    Bundgaard, Kristine; Christensen, Tina Paulsen; Schjoldager, Anne

    2016-01-01

    Though we lack empirically-based knowledge of the impact of computer-aided translation (CAT) tools on translation processes, it is generally agreed that all professional translators are now involved in some kind of translator-computer interaction (TCI), using O’Brien’s (2012) term. Taking a TCI perspective, this paper investigates the relationship between machines and humans in the field of translation, analysing a CAT process in which machine-translation (MT) technology was integrated into a...

  1. Use of simulation for planning the organization of the computational process on digital computing systems

    International Nuclear Information System (INIS)

    A technique is proposed for choosing how jobs are processed on a given computer system structure in real time. These tasks are addressed, for both limited and unlimited memory buffers, by choosing dispatching schemes and managing their use so as to specify the parameters of a set of objectives. The characteristics of the computational process were calculated using a simulation program designed and written in GPSS.
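
    The GPSS program itself is not reproduced in the record; the sketch below is a minimal Python analogue of one such experiment, a single processor fed by random job arrivals with a finite waiting buffer, where the arrival rate, service rate, and buffer size are assumed values and the output is the fraction of jobs lost.

      import random

      def simulate(n_arrivals=50_000, lam=0.9, mu=1.0, buffer_size=8, seed=1):
          # Single processor, Poisson arrivals (rate lam), exponential service (rate mu),
          # and a finite waiting buffer; returns the fraction of jobs that are dropped.
          rng = random.Random(seed)
          t_arrival = rng.expovariate(lam)
          t_departure = float("inf")
          in_system = 0                      # job in service plus jobs waiting
          arrivals = dropped = 0
          while arrivals < n_arrivals:
              if t_arrival <= t_departure:   # next event: a job arrives
                  arrivals += 1
                  now = t_arrival
                  if in_system >= 1 + buffer_size:
                      dropped += 1           # buffer full: job is lost
                  else:
                      in_system += 1
                      if in_system == 1:     # processor was idle: start service
                          t_departure = now + rng.expovariate(mu)
                  t_arrival = now + rng.expovariate(lam)
              else:                          # next event: a job finishes
                  now = t_departure
                  in_system -= 1
                  t_departure = now + rng.expovariate(mu) if in_system else float("inf")
          return dropped / arrivals

      print(f"fraction of jobs dropped: {simulate():.3f}")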

  2. Quantum computation and the physical computation level of biological information processing

    OpenAIRE

    Castagnoli, Giuseppe

    2009-01-01

    On the basis of introspective analysis, we establish a crucial requirement for the physical computation basis of consciousness: it should allow processing a significant amount of information together at the same time. Classical computation does not satisfy the requirement. At the fundamental physical level, it is a network of two body interactions, each the input-output transformation of a universal Boolean gate. Thus, it cannot process together at the same time more than the three bit input ...

  3. An Investigation of the Artifacts and Process of Constructing Computers Games about Environmental Science in a Fifth Grade Classroom

    Science.gov (United States)

    Baytak, Ahmet; Land, Susan M.

    2011-01-01

    This study employed a case study design (Yin, "Case study research, design and methods," 2009) to investigate the processes used by 5th graders to design and develop computer games within the context of their environmental science unit, using the theoretical framework of "constructionism." Ten fifth graders designed computer games using "Scratch"…

  4. Computer-Assisted Regulation of Emotional and Social Processes

    OpenAIRE

    Vanhala, Toni; Surakka, Veikko

    2008-01-01

    The current work presented a model for a computer system that supports the regulation of emotion related processes during exposure to provoking stimuli. We identified two main challenges for these kinds of systems. First, emotions as such are complex, multi-component processes that are measured with several complementary methods. The amount of

  5. Computer presentation in mineral processing by software computer packets

    OpenAIRE

    Krstev, Aleksandar; Krstev, Boris; Golomeov, Blagoj; Golomeova, Mirjana

    2009-01-01

    This paper presents the computer application of the software packages Minteh-1, Minteh-2, and Minteh-3, written in Visual Basic in Visual Studio, for the presentation of two-product closed circuits of grinding-classifying processes. These methods make it possible to present some complex circuits in mineral processing technologies in an appropriate, fast, and reliable way.

  6. Computers in Public Schools: Changing the Image with Image Processing.

    Science.gov (United States)

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  7. One central oscillatory drive is compatible with experimental motor unit behaviour in essential and Parkinsonian tremor

    Science.gov (United States)

    Dideriksen, Jakob L.; Gallego, Juan A.; Holobar, Ales; Rocon, Eduardo; Pons, Jose L.; Farina, Dario

    2015-08-01

    Objective. Pathological tremors are symptomatic of several neurological disorders that are difficult to differentiate, and the way by which central oscillatory networks entrain tremorogenic contractions is unknown. We considered the alternative hypotheses that tremor arises from one oscillator (at the tremor frequency) or, as suggested by recent findings, from the superimposition of two separate inputs (at the tremor frequency and twice that frequency). Approach. Assuming one central oscillatory network, we estimated analytically the relative amplitude of the harmonics of the tremor frequency in the motor neuron output for different temporal behaviors of the oscillator. Next, we analyzed the bias in the relative harmonics amplitude introduced by superimposing oscillations at twice the tremor frequency. These findings were validated using experimental measurements of wrist angular velocity and surface electromyography (EMG) from 22 patients (11 essential tremor, 11 Parkinson’s disease). The ensemble motor unit action potential trains identified from the EMG represented the neural drive to the muscles. Main results. The analytical results showed that the relative power of the tremor harmonics in the analytical models of the neural drive was determined by the variability and duration of the tremor bursts, and that the presence of the second oscillator biased this power towards higher values. The experimental findings accurately matched the analytical model assuming one oscillator, indicating a negligible functional role of secondary oscillatory inputs. Furthermore, a significant difference in the relative power of harmonics in the neural drive was found across the patient groups, suggesting a diagnostic value of this measure (classification accuracy: 86%). This diagnostic power decreased substantially when estimated from limb acceleration or the EMG. Significance. The results indicate that the neural drive in pathological tremor is compatible with one central network
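
    A toy illustration of the harmonic-power measure discussed above, with assumed numbers (a 5 Hz tremor, 20 ms Gaussian bursts, a little timing jitter) rather than patient data: it builds a drive with one burst per tremor cycle and compares the spectral power near twice the tremor frequency with that near the tremor frequency.

      import numpy as np

      fs, dur, f0 = 1000.0, 60.0, 5.0          # sampling rate (Hz), duration (s), tremor frequency (Hz)
      t = np.arange(0.0, dur, 1.0 / fs)
      rng = np.random.default_rng(0)

      # One Gaussian burst per tremor cycle; burst width and jitter shape the harmonic content.
      centres = np.arange(0.5, dur - 0.5, 1.0 / f0)
      centres = centres + rng.normal(0.0, 0.008, size=centres.size)
      drive = np.zeros_like(t)
      for c in centres:
          drive += np.exp(-0.5 * ((t - c) / 0.02) ** 2)

      spec = np.abs(np.fft.rfft(drive - drive.mean())) ** 2
      freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

      def band_power(f, half_width=0.5):
          sel = (freqs > f - half_width) & (freqs < f + half_width)
          return spec[sel].sum()

      print(f"power(2*f0) / power(f0) = {band_power(2 * f0) / band_power(f0):.3f}")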

  8. Computer Processing Of Tunable-Diode-Laser Spectra

    Science.gov (United States)

    May, Randy D.

    1991-01-01

    Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.
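
    A minimal sketch of the least-squares fitting step described above, not the actual program: it fits a synthetic direct-transmission scan, modelled as a sloping baseline with a single Gaussian absorption dip, using scipy.optimize.curve_fit; all parameter values are assumed.

      import numpy as np
      from scipy.optimize import curve_fit

      def transmission(x, i0, slope, depth, centre, width):
          # Sloping baseline (laser power vs. current) times one Gaussian absorption dip.
          return (i0 + slope * x) * (1.0 - depth * np.exp(-0.5 * ((x - centre) / width) ** 2))

      x = np.linspace(0.0, 1.0, 400)                       # normalized wavelength-scan coordinate
      rng = np.random.default_rng(2)
      y = transmission(x, 1.0, -0.1, 0.3, 0.55, 0.04) + rng.normal(0.0, 0.005, x.size)

      p0 = [1.0, 0.0, 0.2, 0.5, 0.05]                      # rough initial guesses
      popt, pcov = curve_fit(transmission, x, y, p0=p0)
      print("fitted line centre and width:", popt[3], popt[4])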

  9. Quantum information processing in nanostructures Quantum optics; Quantum computing

    CERN Document Server

    Reina-Estupinan, J H

    2002-01-01

    Since information has been regarded os a physical entity, the field of quantum information theory has blossomed. This brings novel applications, such as quantum computation. This field has attracted the attention of numerous researchers with backgrounds ranging from computer science, mathematics and engineering, to the physical sciences. Thus, we now have an interdisciplinary field where great efforts are being made in order to build devices that should allow for the processing of information at a quantum level, and also in the understanding of the complex structure of some physical processes at a more basic level. This thesis is devoted to the theoretical study of structures at the nanometer-scale, 'nanostructures', through physical processes that mainly involve the solid-state and quantum optics, in order to propose reliable schemes for the processing of quantum information. Initially, the main results of quantum information theory and quantum computation are briefly reviewed. Next, the state-of-the-art of ...

  10. Computer-Aided Modeling of Lipid Processing Technology

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel

    2011-01-01

    are widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately...... this is not the case for the edible oil and biodiesel industries. The oleochemical industry lags behind the chemical industry in terms of thermophysical property modeling and development of computational tools suitable for the design/analysis, and optimization of lipid-related processes. The aim of this work has been...... increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation...

  11. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader in reaching a global understanding of the field and, in conducting studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise on such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  12. Central venous catheter-related bloodstream infections in the intensive care unit

    Directory of Open Access Journals (Sweden)

    Harsha V Patil

    2011-01-01

    Full Text Available Context: Central venous catheter-related bloodstream infection (CRBSI) is associated with high rates of morbidity and mortality in critically ill patients. Aims: This study was conducted to determine the incidence of central venous catheter-related infections (CRIs) and to identify the factors influencing it. So far, very few studies on CRBSI have been conducted in the intensive care unit in India. Settings and Design: This was a prospective, observational study carried out in the medical intensive care unit (MICU) over a period of 1 year from January to December 2004. Materials and Methods: A total of 54 patients with indwelling central venous catheters, aged between 20 and 75 years, were included. The catheters were cultured using the standard semiquantitative culture (SQC) method. Statistical analysis used SPSS-10 statistical software. Results: A total of 54 CVC catheters with 319 catheter days were included in this study. Of the 54 patients with CVCs studied for bacteriology, 39 (72.22%) catheters showed negative SQCs and also negative blood cultures. A total of 15 (27.77%) catheters were positive on SQC, of which 10 (18.52%) were associated with catheter-associated infection and four (7.41%) with catheter-associated bacteremia; the remaining one was a probable catheter-associated bacteremia. CRIs were high among catheters kept in situ for more than 3 days and in emergency procedures where two or more attempts were required for catheterization. A catheter kept in situ for more than 3 days, an inexperienced venipuncturist, a greater number of attempts, and emergency CVC placement were associated with a higher incidence of CVCBSIs, with P <0.02. The duration of catheter in situ was negatively correlated (-0.53) and the number of attempts required to place the CVC was positively correlated (+0.39) with the incidence of CVCBSIs. Sixty-five percent of the isolates belonged to the CONS group (13/20). Staphylococcus epidermidis showed maximum susceptibility to amikacin, doxycycline and amoxycillin with

  13. Kinds of damage that could result from a great earthquake in the central United States

    Science.gov (United States)

    Hooper, M.G.; Algermissen, S.T.

    1985-01-01

    In the winter of 1811-12 a series of three great earthquakes occurred in the New Madrid, Missouri seismic zone in the central United States. In addition to the three principal shocks, at least 15 other earthquakes of intensity VIII or more occurred within a year of the first large earthquake on December 16, 1811. The three main shocks were felt over the entire eastern United States. They were strong enough to cause minor damage as far away as Indiana and Ohio on the north, the Carolinas on the east, and southern Mississippi to the south. They were strong enough to cause severe or structural damage in parts of Missouri, Illinois, Indiana, Kentucky, Tennessee, Mississippi, and Arkansas. A later section in this article describes what happened in the epicentral region. Fortunately, few people lived in the severely shaken area in 1811; that is not the case today. What would happen if a series of earthquakes as large and numerous as the "New Madrid" earthquakes were to occur in the New Madrid seismic zone today?

  14. Rationale awareness for quality assurance in iterative human computation processes

    CERN Document Server

    Xiao, Lu

    2012-01-01

    Human computation refers to the outsourcing of computation tasks to human workers. It offers a new direction for solving a variety of problems and calls for innovative ways of managing human computation processes. The majority of human computation tasks take a parallel approach, whereas the potential of an iterative approach, i.e., having workers iteratively build on each other's work, has not been sufficiently explored. This study investigates whether and how human workers' awareness of previous workers' rationales affects the performance of the iterative approach in a brainstorming task and a rating task. Rather than viewing this work as a conclusive piece, the author believes that this research endeavor is just the beginning of a new research focus that examines and supports meta-cognitive processes in crowdsourcing activities.

  15. Opportunities in the United States' gas processing industry

    International Nuclear Information System (INIS)

    To keep up with the increasing amount of natural gas that will be required by the market and with the decreasing quality of the gas at the well-head, the gas processing industry must look to new technologies to stay competitive. The Gas Research Institute (GRI) is managing a research, development, design and deployment program that is projected to save the industry US $230 million/year in operating and capital costs from gas processing related activities in NGL extraction and recovery, dehydration, acid gas removal/sulfur recovery, and nitrogen rejection. Three technologies are addressed here. Multivariable Control (MVC) technology for predictive process control and optimization is installed or in design at fourteen facilities treating a combined total of over 30x10^9 normal cubic meters per year (BN m^3/y) [1.1x10^12 standard cubic feet per year (Tcf/y)]. Simple paybacks are typically under 6 months. A new acid gas removal process based on n-formyl morpholine (NFM) is being field tested that offers 40-50% savings in operating costs and 15-30% savings in capital costs relative to a commercially available physical solvent. The GRI-MemCalc(TM) Computer Program for Membrane Separations and the GRI-Scavenger CalcBase(TM) Computer Program for Scavenging Technologies are screening tools that engineers can use to determine the best practice for treating their gas. (au) 19 refs

  16. Learner Use of Holistic Language Units in Multimodal, Task-Based Synchronous Computer-Mediated Communication

    Directory of Open Access Journals (Sweden)

    Karina Collentine

    2009-06-01

    Full Text Available Second language acquisition (SLA) researchers strive to understand the language and exchanges that learners generate in synchronous computer-mediated communication (SCMC). Doughty and Long (2003) advocate replacing open-ended SCMC with task-based language teaching (TBLT) design principles. Since most task-based SCMC (TB-SCMC) research addresses an interactionist view (e.g., whether uptake occurs), we know little about holistic language units generated by learners even though research suggests that task demands make TB-SCMC communication notably different from general SCMC communication. This study documents and accounts for discourse-pragmatic and sociocultural behaviors learners exhibit in TB-SCMC. To capture a variety of such behaviors, it documents holistic language units produced by intermediate and advanced learners of Spanish during two multimodal, TB-SCMC activities. The study found that simple assertions were most prevalent (a) with dyads at the lower level of instruction and (b) when dyads had a relatively short amount of time to chat. Additionally, interpersonal, sociocultural behaviors (e.g., joking, off-task discussions) were more likely to occur (a) amongst dyads at the advanced level and (b) when they had relatively more time to chat. Implications explain how tasks might mitigate the potential processing overload that multimodal materials could incur.

  17. Intelligent Computational Systems. Opening Remarks: CFD Application Process Workshop

    Science.gov (United States)

    VanDalsem, William R.

    1994-01-01

    This discussion will include a short review of the challenges that must be overcome if computational physics technology is to have a larger impact on the design cycles of U.S. aerospace companies. Some of the potential solutions to these challenges may come from the information sciences fields. A few examples of potential computational physics/information sciences synergy will be presented, as motivation and inspiration for the Improving The CFD Applications Process Workshop.

  18. Image processing and computer graphics in radiology. Pt. B

    International Nuclear Information System (INIS)

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review strongly clings to practice and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems and information and communication systems, man-machine interactions and software engineering. Methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG)

  19. Image processing and computer graphics in radiology. Pt. A

    International Nuclear Information System (INIS)

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review strongly clings to practice and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems and information and communication systems, man-machine interactions and software engineering. Methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG)

  20. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2008-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. In F. W. B. Li, J. Zhao, T. K. Shih, R. W. H. Lau, Q. Li & D. McLeod (Eds.), Advances in Web Based Learning - Proceedings of the 7th Inter

  1. Distributed trace using central performance counter memory

    Science.gov (United States)

    Satterfield, David L.; Sexton, James C.

    2013-01-22

    A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy-chain manner, forming a daisy-chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy-chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit.

  2. Toward optical signal processing using photonic reservoir computing.

    Science.gov (United States)

    Vandoorne, Kristof; Dierckx, Wouter; Schrauwen, Benjamin; Verstraeten, David; Baets, Roel; Bienstman, Peter; Van Campenhout, Jan

    2008-07-21

    We propose photonic reservoir computing as a new approach to optical signal processing in the context of large scale pattern recognition problems. Photonic reservoir computing is a photonic implementation of the recently proposed reservoir computing concept, where the dynamics of a network of nonlinear elements are exploited to perform general signal processing tasks. In our proposed photonic implementation, we employ a network of coupled Semiconductor Optical Amplifiers (SOA) as the basic building blocks for the reservoir. Although they differ in many key respects from traditional software-based hyperbolic tangent reservoirs, we show using simulations that such a photonic reservoir can outperform traditional reservoirs on a benchmark classification task. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed.
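
    For readers unfamiliar with the reservoir computing concept mentioned above, the sketch below is a minimal software (hyperbolic-tangent) reservoir of the kind the photonic version is benchmarked against; the reservoir size, spectral radius, and the delay-reconstruction task are illustrative assumptions, not the paper's benchmark.

      import numpy as np

      rng = np.random.default_rng(0)
      n_res = 200

      # Random input and recurrent weights; recurrent matrix scaled to spectral radius 0.9.
      w_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
      w = rng.normal(0.0, 1.0, size=(n_res, n_res))
      w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))

      def run_reservoir(u):
          x = np.zeros(n_res)
          states = np.empty((len(u), n_res))
          for k, u_k in enumerate(u):
              x = np.tanh(w @ x + w_in[:, 0] * u_k)       # nonlinear node update
              states[k] = x
          return states

      # Toy task: reconstruct a 5-step delayed copy of the input with a linear readout.
      u = rng.uniform(-1.0, 1.0, 2000)
      target = np.roll(u, 5)
      states = run_reservoir(u)[100:]                      # discard the initial transient
      y = target[100:]
      w_out, *_ = np.linalg.lstsq(states, y, rcond=None)
      print("training MSE:", np.mean((states @ w_out - y) ** 2))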

  4. Discontinuous Galerkin methods on graphics processing units for nonlinear hyperbolic conservation laws

    CERN Document Server

    Fuhry, Martin; Krivodonova, Lilia

    2016-01-01

    We present a novel implementation of the modal discontinuous Galerkin (DG) method for hyperbolic conservation laws in two dimensions on graphics processing units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA). Both flexible and highly accurate, DG methods accommodate parallel architectures well as their discontinuous nature produces element-local approximations. High performance scientific computing suits GPUs well, as these powerful, massively parallel, cost-effective devices have recently included support for double-precision floating point numbers. Computed examples for Euler equations over unstructured triangle meshes demonstrate the effectiveness of our implementation on an NVIDIA GTX 580 device. Profiling of our method reveals performance comparable to an existing nodal DG-GPU implementation for linear problems.

  5. Business Process Quality Computation: Computing Non-Functional Requirements to Improve Business Processes

    NARCIS (Netherlands)

    Heidari, F.

    2015-01-01

    Business process modelling is an important part of system design. When designing or redesigning a business process, stakeholders specify, negotiate, and agree on business requirements to be satisfied, including non-functional requirements that concern the quality of the business process. This thesis

  6. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
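
    The sketch below is a toy slotted model of that registration contention, not the actual G-PON serial-number procedure: each unregistered ONU answers in one randomly chosen slot of an assumed quiet window, the OLT counts slots with exactly one response as successful registrations, and the remainder are the collided ONUs whose number the OLT would use to resize the window.

      import random

      def registration_round(n_onus, n_slots, rng):
          # Each unregistered ONU responds in one random slot of the quiet window.
          responses = [0] * n_slots
          for _ in range(n_onus):
              responses[rng.randrange(n_slots)] += 1
          registered = sum(1 for r in responses if r == 1)   # lone response: success
          return registered, n_onus - registered             # successes, collisions

      rng = random.Random(0)
      for n_onus in (4, 16, 64):
          ok, collided = registration_round(n_onus, n_slots=32, rng=rng)
          print(f"{n_onus:3d} ONUs, 32 slots -> registered {ok:3d}, collided {collided:3d}")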

  7. Computer presentation of the closed circuits in mineral processing by software computer packets

    OpenAIRE

    Krstev, Aleksandar; Krstev, Boris; Golomeov, Blagoj

    2009-01-01

    This paper presents the computer application of the software packages Minteh-1, Minteh-2, and Minteh-3, written in Visual Basic in Visual Studio, for the presentation of two-product closed circuits of grinding-classifying processes. These methods make it possible to present some complex circuits in mineral processing technologies in an appropriate, fast, and reliable way.

  8. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released In August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB, (v1.4), contains data from more than 940 sessions, which represents more...... than 300 hours of text production. The database provides the raw logging data, as well as Tables of pre-processed product- and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text...... reception, production, and translation processes. In this talk I describe some of the functions and features of the TPR-DB v1.4, and how they can be deployed in empirical human translation process research....

  9. A new perspective on the 1930s mega-heat waves across central United States

    Science.gov (United States)

    Cowan, Tim; Hegerl, Gabi

    2016-04-01

    The unprecedented hot and dry conditions that plagued the contiguous United States during the 1930s caused widespread devastation for many local communities and severely dented the emerging economy. The heat extremes experienced during the aptly named Dust Bowl decade were not isolated incidents, but part of a tendency towards warm summers over the central United States in the early 1930s, and peaked in the boreal summer of 1936. Using high-quality daily maximum and minimum temperature observations from more than 880 Global Historical Climate Network stations across the United States and southern Canada, we assess the record-breaking heat waves in the 1930s Dust Bowl decade. A comparison is made to more recent heat waves that have occurred during the latter half of the 20th century (i.e., in a warming world), both averaged over selected years and across decades. We further test the ability of coupled climate models to simulate mega-heat waves (i.e. most extreme events) across the United States in a pre-industrial climate without the impact of any long-term anthropogenic warming. Well-established heat wave metrics based on the temperature percentile threshold exceedances over three or more consecutive days are used to describe variations in the frequency, duration, amplitude and timing of the events. Causal factors such as drought severity/soil moisture deficits in the lead up to the heat waves (interannual), as well as the concurrent synoptic conditions (interdiurnal) and variability in Pacific and Atlantic sea surface temperatures (decadal) are also investigated. Results suggest that while each heat wave summer in the 1930s exhibited quite unique characteristics in terms of their timing, duration, amplitude, and regional clustering, a common factor in the Dust Bowl decade was the high number of consecutive dry seasons, as measured by drought indicators such as the Palmer Drought Severity and Standardised Precipitation indices, that preceded the mega-heat waves. This

  10. Improving management decision processes through centralized communication linkages

    Science.gov (United States)

    Simanton, D. F.; Garman, J. R.

    1985-01-01

    Information flow is a critical element to intelligent and timely decision-making. At NASA's Johnson Space Center the flow of information is being automated through the use of a centralized backbone network. The theoretical basis of this network, its implications to the horizontal and vertical flow of information, and the technical challenges involved in its implementation are the focus of this paper. The importance of the use of common tools among programs and some future concerns related to file transfer, graphics transfer, and merging of voice and data are also discussed.

  11. Advanced Investigation and Comparative Study of Graphics Processing Unit-queries Countered

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2014-10-01

    Full Text Available GPU, the Graphics Processing Unit, is the buzzword ruling the market these days. What it is and how it has gained such importance are the questions answered in this research work. The study has been constructed with full attention paid to answering the following questions. What is a GPU? How is it different from a CPU? How good or bad is it computationally compared with a CPU? Can the GPU replace the CPU, or is that a daydream? How significant is the arrival of the APU (Accelerated Processing Unit) in the market? What tools are needed to make a GPU work? What are the improvement/focus areas for the GPU to stand in the market? All of the above questions are discussed and answered in this study with relevant explanations.
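
    One way to make the CPU-versus-GPU comparison concrete is a timing sketch such as the one below, which multiplies two matrices with NumPy on the CPU and, if the optional CuPy library and a CUDA device happen to be available, repeats the measurement on the GPU; the matrix size and the use of CuPy are assumptions made purely for illustration.

      import time
      import numpy as np

      def time_matmul(xp, n=2000, repeats=3):
          # Best-of-`repeats` wall-clock time for an n x n single-precision matrix product.
          a = xp.random.random((n, n)).astype(xp.float32)
          b = xp.random.random((n, n)).astype(xp.float32)
          best = float("inf")
          for _ in range(repeats):
              t0 = time.perf_counter()
              c = a @ b
              if xp.__name__ == "cupy":
                  xp.cuda.Stream.null.synchronize()   # wait for the GPU kernel to finish
              best = min(best, time.perf_counter() - t0)
          return best

      print(f"CPU (NumPy): {time_matmul(np):.3f} s")
      try:
          import cupy as cp                           # GPU path, only if CuPy + CUDA are present
          print(f"GPU (CuPy):  {time_matmul(cp):.3f} s")
      except ImportError:
          print("CuPy not installed; skipping the GPU measurement.")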

  12. Corrective Action Plan for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    K. Campbell

    2000-04-01

    This Corrective Action Plan provides methods for implementing the approved corrective action alternative as provided in the Corrective Action Decision Document for the Central Nevada Test Area (CNTA), Corrective Action Unit (CAU) 417 (DOE/NV, 1999). The CNTA is located in the Hot Creek Valley in Nye County, Nevada, approximately 137 kilometers (85 miles) northeast of Tonopah, Nevada. The CNTA consists of three separate land withdrawal areas commonly referred to as UC-1, UC-3, and UC-4, all of which are accessible to the public. CAU 417 consists of 34 Corrective Action Sites (CASs). Results of the investigation activities completed in 1998 are presented in Appendix D of the Corrective Action Decision Document (DOE/NV, 1999). According to the results, the only Constituent of Concern at the CNTA is total petroleum hydrocarbons (TPH). Of the 34 CASs, corrective action was proposed for 16 sites in 13 CASs. In fiscal year 1999, a Phase I Work Plan was prepared for the construction of a cover on the UC-4 Mud Pit C to gather information on cover constructibility and to perform site management activities. With Nevada Division of Environmental Protection concurrence, the Phase I field activities began in August 1999. A multi-layered cover using a Geosynthetic Clay Liner as an infiltration barrier was constructed over the UC-4 Mud Pit. Some TPH impacted material was relocated, concrete monuments were installed at nine sites, signs warning of site conditions were posted at seven sites, and subsidence markers were installed on the UC-4 Mud Pit C cover. Results from the field activities indicated that the UC-4 Mud Pit C cover design was constructable and could be used at the UC-1 Central Mud Pit (CMP). However, because of the size of the UC-1 CMP this design would be extremely costly. An alternative cover design, a vegetated cover, is proposed for the UC-1 CMP.

  13. Evaluating historical climate and hydrologic trends in the Central Appalachian region of the United States

    Science.gov (United States)

    Gaertner, B. A.; Zegre, N.

    2015-12-01

    Climate change is surfacing as one of the most important environmental and social issues of the 21st century. Over the last 100 years, observations show increasing trends in global temperatures and intensity and frequency of precipitation events such as flooding, drought, and extreme storms. Global circulation models (GCM) show similar trends for historic and future climate indicators, albeit with geographic and topographic variability at regional and local scale. In order to assess the utility of GCM projections for hydrologic modeling, it is important to quantify how robust GCM outputs are compared to robust historical observations at finer spatial scales. Previous research in the United States has primarily focused on the Western and Northeastern regions due to dominance of snow melt for runoff and aquifer recharge but the impact of climate warming in the mountainous central Appalachian Region is poorly understood. In this research, we assess the performance of GCM-generated historical climate compared to historical observations primarily in the context of forcing data for macro-scale hydrologic modeling. Our results show significant spatial heterogeneity of modeled climate indices when compared to observational trends at the watershed scale. Observational data is showing considerable variability within maximum temperature and precipitation trends, with consistent increases in minimum temperature. The geographic, temperature, and complex topographic gradient throughout the central Appalachian region is likely the contributing factor in temperature and precipitation variability. Variable climate changes are leading to more severe and frequent climate events such as temperature extremes and storm events, which can have significant impacts on our drinking water supply, infrastructure, and health of all downstream communities.

  14. BarraCUDA - a fast short read sequence aligner using graphics processing units

    LENUS (Irish Health Repository)

    Klus, Petr

    2012-01-13

    Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU), extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computational-intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  15. BarraCUDA - a fast short read sequence aligner using graphics processing units

    Directory of Open Access Journals (Sweden)

    Klus Petr

    2012-01-01

    Full Text Available Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU), extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computational-intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  16. The certification process of the LHCb distributed computing software

    CERN Document Server

    CERN. Geneva

    2015-01-01

    DIRAC contains around 200 thousand lines of Python code, and LHCbDIRAC around 120 thousand. The testing process for each release consists of a number of steps that include static code analysis, unit tests, integration tests, regression tests, and system tests. We dubbed the full p...
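
    No (LHCb)DIRAC test code is shown in the record; purely to illustrate what the unit-test step of such a release pipeline looks like, here is a self-contained example in Python's standard unittest style, built around a hypothetical helper function invented for the purpose.

      import unittest

      def merge_catalog_entries(entries):
          # Hypothetical helper: merge duplicate file records, keeping the largest size.
          merged = {}
          for name, size in entries:
              merged[name] = max(size, merged.get(name, 0))
          return merged

      class TestMergeCatalogEntries(unittest.TestCase):
          def test_duplicates_keep_largest_size(self):
              entries = [("a.dst", 10), ("a.dst", 12), ("b.dst", 3)]
              self.assertEqual(merge_catalog_entries(entries), {"a.dst": 12, "b.dst": 3})

          def test_empty_input(self):
              self.assertEqual(merge_catalog_entries([]), {})

      if __name__ == "__main__":
          unittest.main()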

  17. Bioinformation processing a primer on computational cognitive science

    CERN Document Server

    Peterson, James K

    2016-01-01

    This book shows how mathematics, computer science and science can be usefully and seamlessly intertwined. It begins with a general model of cognitive processes in a network of computational nodes, such as neurons, using a variety of tools from mathematics, computational science and neurobiology. It then moves on to solve the diffusion model from a low-level random walk point of view. It also demonstrates how this idea can be used in a new approach to solving the cable equation, in order to better understand the neural computation approximations. It introduces specialized data for emotional content, which allows a brain model to be built using MatLab tools, and also highlights a simple model of cognitive dysfunction.

  18. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2013-08-02

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This... Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants,'' issued for...

  19. Evidence of a sensory processing unit in the mammalian macula

    Science.gov (United States)

    Chimento, T. C.; Ross, M. D.

    1996-01-01

    We cut serial sections through the medial part of the rat vestibular macula for transmission electron microscopic (TEM) examination, computer-assisted 3-D reconstruction, and compartmental modeling. The ultrastructural research showed that many primary vestibular neurons have an unmyelinated segment, often branched, that extends between the heminode (putative site of the spike initiation zone) and the expanded terminal(s) (calyx, calyces). These segments, termed the neuron branches, and the calyces frequently have spine-like processes of various dimensions with bouton endings that morphologically are afferent, efferent, or reciprocal to other macular neural elements. The major questions posed by this study were whether small details of morphology, such as the size and location of neuronal processes or synapses, could influence the output of a vestibular afferent, and whether a knowledge of morphological details could guide the selection of values for simulation parameters. The conclusions from our simulations are (1) values of 5.0 kΩ cm2 for membrane resistivity and 1.0 nS for synaptic conductance yield simulations that best match published physiological results; (2) process morphology has little effect on orthodromic spread of depolarization from the head (bouton) to the spike initiation zone (SIZ); (3) process morphology has no effect on antidromic spread of depolarization to the process head; (4) synapses do not sum linearly; (5) synapses are electrically close to the SIZ; and (6) all whole-cell simulations should be run with an active SIZ.
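
    As a rough illustration of how the two fitted parameters quoted above (membrane resistivity of 5.0 kΩ cm2 and synaptic conductance of 1.0 nS) enter a compartmental simulation, here is a minimal single-compartment passive-membrane sketch in Python. The compartment area, capacitance, reversal potentials and synaptic time course are assumed values, and the model is far simpler than the reconstructed multi-compartment neurons used in the study.

    import math

    # Single passive compartment driven by one synaptic conductance; only the
    # 5.0 kOhm*cm^2 resistivity and 1.0 nS conductance come from the abstract,
    # everything else below is an illustrative assumption.
    R_M   = 5.0e3      # membrane resistivity, Ohm*cm^2
    C_M   = 1.0e-6     # specific capacitance, F/cm^2 (assumed)
    AREA  = 1.0e-5     # compartment area, cm^2, roughly 1000 um^2 (assumed)
    G_SYN = 1.0e-9     # peak synaptic conductance, S
    E_L, E_SYN = -65e-3, 0.0   # leak and synaptic reversal potentials, V (assumed)
    TAU_SYN = 1.0e-3           # synaptic decay time constant, s (assumed)

    g_leak = AREA / R_M        # total leak conductance, S
    c_tot  = AREA * C_M        # total capacitance, F

    def g_syn(t, t0=2e-3):
        """Exponentially decaying conductance switched on at t0 (assumed shape)."""
        return G_SYN * math.exp(-(t - t0) / TAU_SYN) if t >= t0 else 0.0

    dt, v = 1e-5, E_L
    for step in range(int(10e-3 / dt)):        # simulate 10 ms with forward Euler
        t = step * dt
        dv = (-g_leak * (v - E_L) - g_syn(t) * (v - E_SYN)) / c_tot
        v += dt * dv
        if step % 100 == 0:
            print(f"t = {t*1e3:4.1f} ms   V = {v*1e3:6.2f} mV")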

  20. Non-parallel processing: Gendered attrition in academic computer science

    Science.gov (United States)

    Cohoon, Joanne Louise Mcgrath

    2000-10-01

    This dissertation addresses the issue of disproportionate female attrition from computer science as an instance of gender segregation in higher education. By adopting a theoretical framework from organizational sociology, it demonstrates that the characteristics and processes of computer science departments strongly influence female retention. The empirical data identifies conditions under which women are retained in the computer science major at comparable rates to men. The research for this dissertation began with interviews of students, faculty, and chairpersons from five computer science departments. These exploratory interviews led to a survey of faculty and chairpersons at computer science and biology departments in Virginia. The data from these surveys are used in comparisons of the computer science and biology disciplines, and for statistical analyses that identify which departmental characteristics promote equal attrition for male and female undergraduates in computer science. This three-pronged methodological approach of interviews, discipline comparisons, and statistical analyses shows that departmental variation in gendered attrition rates can be explained largely by access to opportunity, relative numbers, and other characteristics of the learning environment. Using these concepts, this research identifies nine factors that affect the differential attrition of women from CS departments. These factors are: (1) The gender composition of enrolled students and faculty; (2) Faculty turnover; (3) Institutional support for the department; (4) Preferential attitudes toward female students; (5) Mentoring and supervising by faculty; (6) The local job market, starting salaries, and competitiveness of graduates; (7) Emphasis on teaching; and (8) Joint efforts for student success. This work contributes to our understanding of the gender segregation process in higher education. In addition, it contributes information that can lead to effective solutions for an

  1. Computational fluid dynamics evaluation of liquid food thermal process in a brick shaped package

    Directory of Open Access Journals (Sweden)

    Pedro Esteves Duarte Augusto

    2012-03-01

    Full Text Available Food processes must ensure safety and high-quality products for increasingly demanding consumers, creating the need for better knowledge of their unit operations. Computational Fluid Dynamics (CFD) has been widely used to better understand food thermal processes, and thermal processing is one of the safest and most frequently used methods of food preservation. However, not a single study in the literature describes the thermal processing of liquid foods in a brick-shaped package. The present study evaluated such a process and the influence of package orientation on process lethality. It demonstrated the potential of using CFD to evaluate thermal processes of liquid foods and the importance of rheological characterization and convection in the thermal processing of liquid foods. It also showed that package orientation does not result in different sterilization values during thermal processing of the evaluated fluids in the brick-shaped package.
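
    The process lethality referred to above is commonly quantified by the sterilization value F0, the time integral of the lethal rate 10**((T - 121.1)/10) at the coldest point of the package. The short Python sketch below computes F0 from a time-temperature history; the history shown is synthetic, for illustration only, since the study obtains T(t) at the slowest-heating zone from CFD simulation rather than from these numbers.

    # Sterilization value F0 from a cold-spot time-temperature history,
    #   F0 = integral 10**((T - T_ref)/z) dt,  with T_ref = 121.1 C, z = 10 C.
    T_REF, Z = 121.1, 10.0   # deg C

    def lethality(times_min, temps_c):
        """Trapezoidal integration of the lethal rate; result in minutes."""
        f0 = 0.0
        for (t0, T0), (t1, T1) in zip(zip(times_min, temps_c),
                                      zip(times_min[1:], temps_c[1:])):
            l0 = 10 ** ((T0 - T_REF) / Z)
            l1 = 10 ** ((T1 - T_REF) / Z)
            f0 += 0.5 * (l0 + l1) * (t1 - t0)
        return f0

    times = [0, 2, 4, 6, 8, 10, 12, 14, 16]             # min (synthetic)
    temps = [70, 95, 110, 118, 121, 121, 118, 105, 85]  # deg C (synthetic)
    print(f"F0 = {lethality(times, temps):.2f} min")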

  2. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  3. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  4. Computer-Aided Process Model For Carbon/Phenolic Materials

    Science.gov (United States)

    Letson, Mischell A.; Bunker, Robert C.

    1996-01-01

    Computer program implements thermochemical model of processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving fabrication of rocket-engine-nozzle parts, also used to optimize fabrication of other structural components, and material-property parameters changed to apply to other materials. Reduces costs by reducing amount of laboratory trial and error needed to optimize curing processes and to predict properties of cured parts.

  5. Quantum computation and the physical computation level of biological information processing

    CERN Document Server

    Castagnoli, Giuseppe

    2009-01-01

    On the basis of introspective analysis, we establish a crucial requirement for the physical computation basis of consciousness: it should allow processing a significant amount of information together at the same time. Classical computation does not satisfy the requirement. At the fundamental physical level, it is a network of two-body interactions, each the input-output transformation of a universal Boolean gate. Thus, it cannot process together at the same time more than the three-bit input of this gate - many such gates in parallel do not count, since the information is not processed together. Quantum computation satisfies the requirement. In the light of our recent explanation of the speed-up, quantum measurement of the solution of the problem is analogous to a many-body interaction between the parts of a perfect classical machine, whose mechanical constraints represent the problem to be solved. The many-body interaction satisfies all the constraints together at the same time, producing the solution in one ...

  6. A control unit for a laser module of optoelectronic computing environment with dynamic architecture

    Directory of Open Access Journals (Sweden)

    Lipinskii A. Y.

    2013-06-01

    Full Text Available The paper presents a control unit for the laser modules of an optoelectronic acousto-optic computing environment. The unit is based on an ARM microcontroller of the Cortex-M3 family and allows alternating between recording (erase) and reading modes in accordance with a predetermined algorithm and settings (exposure time and intensity). The principal electric circuit of the presented device, the block diagram of the microcontroller algorithm, and an example application of the developed control unit in the layout of the experimental setup are provided.

  7. Computer simulation program is adaptable to industrial processes

    Science.gov (United States)

    Schultz, F. E.

    1966-01-01

    The Reaction kinetics ablation program /REKAP/, developed to simulate ablation of various materials, provides mathematical formulations for computer programs which can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.

  8. Future Information Processing Technology--1983, Computer Science and Technology.

    Science.gov (United States)

    Kay, Peg, Ed.; Powell, Patricia, Ed.

    Developed by the Institute for Computer Sciences and Technology and the Defense Intelligence Agency with input from other federal agencies, this detailed document contains the 1983 technical forecast for the information processing industry through 1997. Part I forecasts the underlying technologies of hardware and software, discusses changes in the…

  9. Tutorial: Signal Processing in Brain-Computer Interfaces

    NARCIS (Netherlands)

    Garcia Molina, G.

    2010-01-01

    Research in Electroencephalogram (EEG) based Brain-Computer Interfaces (BCIs) has been considerably expanding during the last few years. Such an expansion owes to a large extent to the multidisciplinary and challenging nature of BCI research. Signal processing undoubtedly constitutes an essential co

  10. Molecular epidemiology of Acinetobacter baumannii in central intensive care unit in Kosova teaching hospital

    Directory of Open Access Journals (Sweden)

    Lul Raka

    2009-12-01

    Full Text Available Infections caused by bacteria of the genus Acinetobacter pose a significant health care challenge worldwide. Information on molecular epidemiological investigation of outbreaks caused by Acinetobacter species in Kosova is lacking. The present investigation was carried out to elucidate the molecular epidemiology of Acinetobacter baumannii in the Central Intensive Care Unit (CICU) of a University hospital in Kosova using pulsed-field gel electrophoresis (PFGE). During March - July 2006, A. baumannii was isolated from 30 patients, of whom 22 were infected and 8 were colonised. Twenty patients had ventilator-associated pneumonia, one patient had meningitis, and two had coinfection with bloodstream infection and surgical site infection. The most common diagnoses upon admission to the ICU were polytrauma and cerebral hemorrhage. Bacterial isolates were most frequently recovered from endotracheal aspirate (86.7%). First isolation occurred, on average, on day 8 following admission (range 1-26 days). Genotype analysis of A. baumannii isolates identified nine distinct PFGE patterns, with predominance of PFGE clone E, represented by isolates from 9 patients. Eight strains were resistant to carbapenems. The genetic relatedness of the Acinetobacter baumannii isolates was high, indicating cross-transmission within the ICU setting. These results emphasize the need for measures to prevent nosocomial transmission of A. baumannii in the ICU.

  11. Well Completion Report for Corrective Action Unit 443 Central Nevada Test Area Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-12-01

    The drilling program described in this report is part of a new corrective action strategy for Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA). The drilling program included drilling two boreholes, geophysical well logging, construction of two monitoring/validation (MV) wells with piezometers (MV-4 and MV-5), development of monitor wells and piezometers, recompletion of two existing wells (HTH-1 and UC-1-P-1S), removal of pumps from existing wells (MV-1, MV-2, and MV-3), redevelopment of piezometers associated with existing wells (MV-1, MV-2, and MV-3), and installation of submersible pumps. The new corrective action strategy includes initiating a new 5-year proof-of-concept monitoring period to validate the compliance boundary at CNTA (DOE 2007). The new 5-year proof-of-concept monitoring period begins upon completion of the new monitor wells and collection of samples for laboratory analysis. The new strategy is described in the Corrective Action Decision Document/Corrective Action Plan addendum (DOE 2008a) that the Nevada Division of Environmental Protection approved (NDEP 2008).

  12. Ground Motion Prediction Equations for the Central and Eastern United States

    Science.gov (United States)

    Seber, D.; Graizer, V.

    2015-12-01

    A new ground motion prediction equation (GMPE) model for the Central and Eastern United States (CEUS), G15, is presented. It is based on the modular filter-based approach developed by Graizer and Kalkan (2007, 2009) for the active tectonic environment in the Western US (WUS). The G15 model is based on the NGA-East database for the horizontal peak ground acceleration and 5%-damped pseudo spectral acceleration RotD50 component (Goulet et al., 2014). In contrast to the active tectonic environment, the database for the CEUS is not sufficient for creating a purely empirical GMPE covering the range of magnitudes and distances required for seismic hazard assessments. Recordings in the NGA-East database are sparse and cover mostly range of Mindustry (Vs=2800 m/s). The number of model predictors is limited to a few measurable parameters: moment magnitude M, closest distance to the fault rupture plane R, average shear-wave velocity in the upper 30 m of the geological profile VS30, and anelastic attenuation factor Q0. Incorporating anelastic attenuation Q0 as an input parameter allows adjustments based on regional crustal properties. The model covers the range of magnitudes 4.010 Hz) and is within the range of other models for frequencies lower than 2.5 Hz

  13. Impact of climate variability on runoff in the north-central United States

    Science.gov (United States)

    Ryberg, Karen R.; Lin, Wei; Vecchia, Aldo V.

    2014-01-01

    Large changes in runoff in the north-central United States have occurred during the past century, with larger floods and increases in runoff tending to occur from the 1970s to the present. The attribution of these changes is a subject of much interest. Long-term precipitation, temperature, and streamflow records were used to compare changes in precipitation and potential evapotranspiration (PET) to changes in runoff within 25 stream basins. The basins studied were organized into four groups, each one representing basins similar in topography, climate, and historic patterns of runoff. Precipitation, PET, and runoff data were adjusted for near-decadal scale variability to examine longer-term changes. A nonlinear water-balance analysis shows that changes in precipitation and PET explain the majority of multidecadal spatial/temporal variability of runoff and flood magnitudes, with precipitation being the dominant driver. Historical changes in climate and runoff in the region appear to be more consistent with complex transient shifts in seasonal climatic conditions than with gradual climate change. A portion of the unexplained variability likely stems from land-use change.

  14. Carbon Flux of Down Woody Materials in Forests of the North Central United States

    International Nuclear Information System (INIS)

    Across large scales, the carbon (C) flux of down woody material (DWM) detrital pools has largely been simulated based on forest stand attributes (e.g., stand age and forest type). The annual change in forest DWM C stocks and other attributes (e.g., size and decay class changes) was assessed using a forest inventory in the north central United States to provide an empirical assessment of strategic-scale DWM C flux. Using DWM inventory data from the USDA Forest Service's Forest Inventory and Analysis program, DWM C stocks were found to be relatively static across the study region, with an annual flux rate not statistically different from zero. Mean C flux rates across the study area were -0.25, -0.12, -0.01, and -0.04 (Mg/ha/yr) for standing live trees, standing dead trees, coarse woody debris, and fine woody debris, respectively. Flux rates varied in both magnitude and status (emission/sequestration) by forest type, latitude, and DWM component size. Given the complex dynamics of DWM C flux, early implementation of inventory remeasurement, and relatively low sample size, numerous future research directions are suggested.

  15. Validity of Drought Indices as Drought Predictors in the South-Central United States

    Science.gov (United States)

    Rohli, R. V.; Bushra, N.; Lam, N.; Zou, L.; Mihunov, V.; Reams, M.; Argote, J.

    2015-12-01

    Drought is among the most insidious types of natural disasters and can have tremendous economic and human health impacts. This research analyzes the relationship between two readily accessible drought indices - the Palmer Drought Severity Index (PDSI) and the Palmer Hydrologic Drought Index (PHDI) - and the damage incurred by droughts in terms of monetary loss, on a monthly basis over the 1975-2010 period, for five states in the south-central U.S.A. Because drought damage in the Spatial Hazards Events and Losses Database for the United States (SHELDUS™) is reported at the county level, statistical downscaling techniques were used to estimate the county-level PDSI and PHDI. Correlation analysis using the downscaled indices suggests that, although relatively few months contain drought damage reports, drought indices can in general be useful predictors of drought damage at the monthly temporal scale, extended to 12 months, and at the county-wide spatial scale. The varying time lags between the occurrence of drought and the reporting of damage are thought to contribute to weakened correlations; these lags likely reflect differences in resilience to drought intensity and duration among crop types, irrigation methods, and community adaptation measures, all of which vary over space and time. These results are a reminder of the complexities of anticipating the effects of drought, but they contribute to the effort to improve our ability to mitigate the effects of incipient drought.
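
    A bare-bones version of the lagged-correlation screening described above can be written in a few lines of Python. The monthly PDSI and damage series below are synthetic stand-ins (the study used downscaled county-level PDSI/PHDI and SHELDUS losses), so only the mechanics, not the numbers, carry over.

    import random, statistics

    def pearson(x, y):
        """Population Pearson correlation coefficient."""
        mx, my = statistics.mean(x), statistics.mean(y)
        s = sum((a - mx) * (b - my) for a, b in zip(x, y))
        return s / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

    random.seed(0)
    n = 432                                   # 36 years of monthly values
    pdsi = [random.gauss(0, 2) for _ in range(n)]
    # Synthetic damages that respond to drought (negative PDSI) three months later.
    damage = [max(0.0, -pdsi[i - 3]) + random.gauss(0, 0.3) if i >= 3 else 0.0
              for i in range(n)]

    for lag in range(0, 13):                  # screen lags of 0-12 months
        r = pearson(pdsi[:n - lag], damage[lag:])
        print(f"lag {lag:2d} months:  r = {r:+.2f}")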

  16. Final design of the Switching Network Units for the JT-60SA Central Solenoid

    Energy Technology Data Exchange (ETDEWEB)

    Lampasi, Alessandro, E-mail: alessandro.lampasi@enea.it [National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Frascati (Italy); Coletti, Alberto; Novello, Luca [Fusion for Energy (F4E) Broader Fusion Development Department, Garching (Germany); Matsukawa, Makoto [Japan Atomic Energy Agency, Naka Fusion Institute, Mukouyama, Naka-si, Ibaraki-ken (Japan); Burini, Filippo; Taddia, Giuseppe; Tenconi, Sandro [OCEM Energy Technology, San Giorgio Di Piano (Italy)

    2014-04-15

    This paper describes the approved detailed design of the four Switching Network Units (SNUs) of the superconducting Central Solenoid of JT-60SA, the satellite tokamak that will be built in Naka, Japan, in the framework of the “Broader Approach” cooperation agreement between Europe and Japan. The SNUs can interrupt a current of 20 kA DC in less than 1 ms in order to produce a voltage of 5 kV. Such performance is obtained by inserting an electronic static circuit breaker in parallel to an electromechanical contactor and by matching and coordinating their operations. Any undesired transient overvoltage is limited by an advanced snubber circuit optimized for this application. The SNU resistance values can be adapted to the specific operation scenario. In particular, after successful plasma breakdown, the SNU resistance can be reduced by a making switch. The design choices of the main SNU elements are justified by showing and discussing the performed calculations and simulations. In most cases, the developed design is expected to exceed the performances required by the JT-60SA project.

  17. Imaging Rayleigh wave attenuation and phase velocity in the western and central United States

    Science.gov (United States)

    Bao, X.; Dalton, C. A.; Jin, G.; Gaherty, J. B.

    2013-12-01

    The EarthScope USArray provides an opportunity to obtain detailed images of the continental upper mantle at an unprecedented scale. The majority of mantle models derived from USArray data to date contain spatial variations in seismic-wave speed; however, little is known about the attenuation structure of the North American upper mantle. Joint interpretation of seismic attenuation and velocity models can improve upon the interpretations based only on velocity, and provide important constraints on the temperature, composition, melt content, and volatile content of the mantle. We jointly invert Rayleigh wave phase and amplitude observations for phase velocity and attenuation maps for the western and central United States using USArray data. This approach exploits the amplitudes' sensitivity to velocity and the phase delays' sensitivity to attenuation. The phase and amplitude data are measured in the period range 20--100 s using a new interstation cross-correlation approach, based on the Generalized Seismological Data Functional algorithm, that takes advantage of waveform similarity at nearby stations. The Rayleigh waves are generated from 670 large teleseismic earthquakes that occurred between 2006 and 2012, and measured from all available Transportable Array stations. We consider two separate and complementary approaches for imaging attenuation variations: (1) the Helmholtz tomography (Lin et al., 2012) and (2) two-station path tomography. Results obtained from the two methods are contrasted. We provide a preliminary interpretation based on the observed relationship between Rayleigh wave attenuation and phase velocity.

  18. Application of Paleoseismology to Seismic Hazard Analysis in the Central and Eastern United States (CEUS)

    International Nuclear Information System (INIS)

    Paleoseismology techniques have been applied across the CEUS (Central and Eastern United States) to augment seismic data and to improve seismic hazard analyses. Considering paleoseismic data along with historic data may increase the number of events and their maximum magnitudes (Mmax), which may decrease the recurrence time of seismic events included in hazard calculations. More importantly, paleoseismic studies extend the length of the earthquake record often by 1000s–10,000s of years and reduce uncertainties related to sources, magnitude, and recurrence times of earthquakes. The CEUS Seismic Source Characterization (Technical Report, [108]) uses a lot of paleoseismic data in building the source model for seismic hazard analyses. Most of these data are derived through study of paleoliquefaction features. Appendix E of the Technical Report compiles data from ten distinct regions in eastern North America where paleoliquefaction features have been used to improve knowledge of regional seismic history. These regions are shown. Paleoliquefaction data can significantly impact seismic hazard calculations by better defining earthquake sources, Mmax for those sources, and recurrence rates of large earthquakes

  19. Molecular epidemiology of Acinetobacter baumannii in central intensive care unit in Kosova Teaching Hospital.

    Science.gov (United States)

    Raka, Lul; Kalenć, Smilja; Bosnjak, Zrinka; Budimir, Ana; Katić, Stjepan; Sijak, Dubravko; Mulliqi-Osmani, Gjyle; Zoutman, Dick; Jaka, Arbëresha

    2009-12-01

    Infections caused by bacteria of the genus Acinetobacter pose a significant health care challenge worldwide. Information on molecular epidemiological investigation of outbreaks caused by Acinetobacter species in Kosova is lacking. The present investigation was carried out to elucidate the molecular epidemiology of Acinetobacter baumannii in the Central Intensive Care Unit (CICU) of a University hospital in Kosova using pulsed-field gel electrophoresis (PFGE). During March - July 2006, A. baumannii was isolated from 30 patients, of whom 22 were infected and 8 were colonised. Twenty patients had ventilator-associated pneumonia, one patient had meningitis, and two had coinfection with bloodstream infection and surgical site infection. The most common diagnoses upon admission to the ICU were polytrauma and cerebral hemorrhage. Bacterial isolates were most frequently recovered from endotracheal aspirate (86.7%). First isolation occurred, on average, on day 8 following admission (range 1-26 days). Genotype analysis of A. baumannii isolates identified nine distinct PFGE patterns, with predominance of PFGE clone E represented by isolates from 9 patients. Eight strains were resistant to carbapenems. The genetic relatedness of Acinetobacter baumannii was high, indicating cross-transmission within the ICU setting. These results emphasize the need for measures to prevent nosocomial transmission of A. baumannii in the ICU. PMID:20464330

  20. Late-Stage Ductile Deformation in Xiongdian-Suhe HP Metamorphic Unit, North-Western Dabie Shan, Central China

    Institute of Scientific and Technical Information of China (English)

    Suo Shutian; Zhong Zengqiu; Zhou Hanwen; You Zhendong

    2004-01-01

    New structural and petrological data unveil a very complicated ductile deformation history of the Xiongdian-Suhe HP metamorphic unit, north-western Dabie Shan, central China. The fine-grained symplectic amphibolite-facies assemblage and coronal structure enveloping eclogite-facies garnet, omphacite and phengite etc., representing strain-free decompression and retrogressive metamorphism, are considered as the main criteria to distinguish between the early-stage deformation under HP metamorphic conditions related to the continental deep subduction and collision, and the late-stage deformation under amphibolite- to greenschist-facies conditions that occurred in the post-eclogite exhumation processes. Two late stages of widely developed, sequential ductile deformation, D3 and D4, are recognized on the basis of penetrative fabrics and mineral aggregates in the Xiongdian-Suhe HP metamorphic unit, which show clear, regionally consistent overprinting relationships. D3 fabrics are best preserved in the Suhe tract of low post-D3 deformation intensity and characterized by steeply dipping layered mylonitic amphibolites associated with doubly vergent folds. They are attributed to a phase of tectonism linked to the initial exhumation of the HP rocks and involved crustal shortening with the development of upright structures and the widespread emplacement of garnet-bearing granites and felsic dikes. D4 structures are attributed to the main episode of ductile extension (D14) with a gently dipping foliation to the north and common intrafolial, recumbent folds in the Xiongdian tract, followed by normal-sense top-to-the-north ductile shearing (D24) along an important tectonic boundary, the so-called Majiawa-Hexiwan fault (MHF), the westward continuation of the Balifan-Mozitan-Xiaotian fault (BMXF) of the northern Dabie Shan. It is indicated that the two stages of ductile deformation observed in the Xiongdian-Suhe HP metamorphic unit, reflecting the post-eclogite compressional or extrusion

  1. In-silico design of computational nucleic acids for molecular information processing.

    Science.gov (United States)

    Ramlan, Effirul Ikhwan; Zauner, Klaus-Peter

    2013-05-07

    Within recent years nucleic acids have become a focus of interest for prototype implementations of molecular computing concepts. During the same period the importance of ribonucleic acids as components of the regulatory networks within living cells has increasingly been revealed. Molecular computers are attractive due to their ability to function within a biological system; an application area extraneous to the present information technology paradigm. The existence of natural information processing architectures (predominately exemplified by protein) demonstrates that computing based on physical substrates that are radically different from silicon is feasible. Two key principles underlie molecular level information processing in organisms: conformational dynamics of macromolecules and self-assembly of macromolecules. Nucleic acids support both principles, and moreover computational design of these molecules is practicable. This study demonstrates the simplicity with which one can construct a set of nucleic acid computing units using a new computational protocol. With the new protocol, diverse classes of nucleic acids imitating the complete set of boolean logical operators were constructed. These nucleic acid classes display favourable thermodynamic properties and are significantly similar to the approximation of successful candidates implemented in the laboratory. This new protocol would enable the construction of a network of interconnecting nucleic acids (as a circuit) for molecular information processing.
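
    The design goal above is a molecular gate set covering the complete set of Boolean operators. As a purely logical (non-chemical) illustration of what such completeness buys, the Python snippet below verifies by truth-table enumeration that NOT, AND, OR and XOR can all be composed from a single universal gate (NAND); this is textbook Boolean algebra, not the authors' design protocol.

    from itertools import product

    def nand(a, b):
        return 1 - (a & b)

    # Compositions of NAND realizing the basic operators.
    derived = {
        "NOT a":   lambda a, b: nand(a, a),
        "a AND b": lambda a, b: nand(nand(a, b), nand(a, b)),
        "a OR b":  lambda a, b: nand(nand(a, a), nand(b, b)),
        "a XOR b": lambda a, b: nand(nand(a, nand(a, b)), nand(b, nand(a, b))),
    }
    reference = {
        "NOT a":   lambda a, b: 1 - a,
        "a AND b": lambda a, b: a & b,
        "a OR b":  lambda a, b: a | b,
        "a XOR b": lambda a, b: a ^ b,
    }

    for name in derived:
        ok = all(derived[name](a, b) == reference[name](a, b)
                 for a, b in product((0, 1), repeat=2))
        print(f"{name:8s} built from NAND only: {'ok' if ok else 'FAIL'}")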

  2. In-silico design of computational nucleic acids for molecular information processing.

    Science.gov (United States)

    Ramlan, Effirul Ikhwan; Zauner, Klaus-Peter

    2013-01-01

    Within recent years nucleic acids have become a focus of interest for prototype implementations of molecular computing concepts. During the same period the importance of ribonucleic acids as components of the regulatory networks within living cells has increasingly been revealed. Molecular computers are attractive due to their ability to function within a biological system; an application area extraneous to the present information technology paradigm. The existence of natural information processing architectures (predominately exemplified by protein) demonstrates that computing based on physical substrates that are radically different from silicon is feasible. Two key principles underlie molecular level information processing in organisms: conformational dynamics of macromolecules and self-assembly of macromolecules. Nucleic acids support both principles, and moreover computational design of these molecules is practicable. This study demonstrates the simplicity with which one can construct a set of nucleic acid computing units using a new computational protocol. With the new protocol, diverse classes of nucleic acids imitating the complete set of boolean logical operators were constructed. These nucleic acid classes display favourable thermodynamic properties and are significantly similar to the approximation of successful candidates implemented in the laboratory. This new protocol would enable the construction of a network of interconnecting nucleic acids (as a circuit) for molecular information processing. PMID:23647621

  3. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction.Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  4. Parallel Memetic Algorithm for VLSI Circuit Partitioning Problem using Graphical Processing Units

    Directory of Open Access Journals (Sweden)

    P. Sivakumar

    2012-01-01

    Full Text Available Problem statement: A Memetic Algorithm (MA) is a form of population-based hybrid Genetic Algorithm (GA) coupled with an individual learning procedure capable of performing local refinements. Here we use a genetic algorithm to explore the search space and simulated annealing as a local search method to exploit the information in the search region for the optimization of the VLSI netlist bi-partitioning problem. However, such algorithms may execute for a long time, because several fitness evaluations must be performed. A promising approach to overcome this limitation is to parallelize these algorithms. General-Purpose computing over Graphical Processing Units (GPGPU) is a major paradigm shift in parallel computing that promises a dramatic increase in performance. Approach: In this study, we propose to implement a parallel MA using graphics cards. Graphics Processing Units (GPUs) have emerged as powerful parallel processors in recent years. Using GPU-equipped computers, it is possible to accelerate the evaluation of individuals in genetic programming. Program compilation, fitness case data and fitness execution are spread over the cores of the GPU, allowing for the efficient processing of very large datasets. Results: We performed experiments to compare our parallel MA with a sequential MA and demonstrate that the former is much more effective than the latter. Our results were obtained on an NVIDIA GeForce GTX 9400 GPU card. Conclusion: They indicate that our approach is on average 5× faster than a CPU-based implementation. With the Tesla C1060 GPU server, our approach would potentially be 10× faster. The correctness of the GPU-based MA has been verified by comparing its result with a CPU-based MA.
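
    For readers unfamiliar with memetic algorithms, the following compact, CPU-only Python sketch shows the GA-plus-local-search structure on a toy graph bi-partitioning instance. The edge list, population size and the greedy swap refinement (standing in for the simulated annealing used in the paper) are illustrative choices, and the GPU parallelization of fitness evaluation is not reproduced.

    import random

    EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4),
             (0, 4), (2, 6)]                      # illustrative 8-node netlist
    N, POP, GENS = 8, 20, 60

    def cut_size(part):                           # fitness: edges crossing the cut
        return sum(1 for u, v in EDGES if part[u] != part[v])

    def balanced_random():
        part = [0] * (N // 2) + [1] * (N // 2)
        random.shuffle(part)
        return part

    def local_search(part):
        """Greedy balanced pairwise swaps (stands in for the SA refinement)."""
        improved = True
        while improved:
            improved = False
            base = cut_size(part)
            for i in range(N):
                for j in range(i + 1, N):
                    if part[i] != part[j]:
                        part[i], part[j] = part[j], part[i]
                        if cut_size(part) < base:
                            base = cut_size(part)
                            improved = True
                        else:
                            part[i], part[j] = part[j], part[i]
        return part

    def crossover(p1, p2):
        child = [p1[k] if random.random() < 0.5 else p2[k] for k in range(N)]
        # Repair so the two sides stay balanced.
        while sum(child) > N // 2:
            child[random.choice([k for k in range(N) if child[k] == 1])] = 0
        while sum(child) < N // 2:
            child[random.choice([k for k in range(N) if child[k] == 0])] = 1
        return child

    random.seed(1)
    population = [local_search(balanced_random()) for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=cut_size)
        parents = population[:POP // 2]
        children = [local_search(crossover(random.choice(parents),
                                           random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children
    print("best cut size:", cut_size(min(population, key=cut_size)))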

  5. Centralization process in procurement of maintenance, repair and operations (mro) items: Case company X

    OpenAIRE

    Hoang, Huong

    2016-01-01

    This thesis documents the process of a centralization project for Maintenance, Repair and Operations (MRO) procurement and the incentives behind the project; it also discusses the attributes of the problem and recommends solutions for improving the operational side of the project in company X. The research questions seek a standardized process for implementing centralized procurement of MRO items, and the reason why MRO items, especially the packaging ...

  6. Computer-Assisted Scheduling of Army Unit Training: An Application of Simulated Annealing.

    Science.gov (United States)

    Hart, Roland J.; Goehring, Dwight J.

    This report of an ongoing research project intended to provide computer assistance to Army units for the scheduling of training focuses on the feasibility of simulated annealing, a heuristic approach for solving scheduling problems. Following an executive summary and brief introduction, the document is divided into three sections. First, the Army…

  7. Our U.S. Energy Future, Student Guide. Computer Technology Program Environmental Education Units.

    Science.gov (United States)

    Northwest Regional Educational Lab., Portland, OR.

    This is the student guide in a set of five computer-oriented environmental/energy education units. Contents are organized into the following parts or lessons: (1) Introduction to the U.S. Energy Future; (2) Description of the "FUTURE" programs; (3) Effects of "FUTURE" decisions; and (4) Exercises on the U.S. energy future. This guide supplements a…

  8. Parallel processing using an optical delay-based reservoir computer

    Science.gov (United States)

    Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy

    2016-04-01

    Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, the delay-based reservoir computing systems discussed in the literature are built by coupling many different stand-alone components, which leads to bulky, non-monolithic systems that lack long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers (SRLs). Semiconductor ring lasers are semiconductor lasers in which the laser cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even with input data signals of different nature, can be computed simultaneously using a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time-series prediction and a nonlinear channel equalization classification task. We take advantage of the different directional modes to process the individual tasks: each directional mode processes one task, to mitigate possible crosstalk between the tasks. Our results indicate that prediction/classification with errors comparable to state-of-the-art performance can be obtained, even with noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].
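
    As a software analogue of the reservoir computing scheme discussed above, here is a minimal echo state network in Python/NumPy performing one-step time-series prediction. The reservoir size, spectral radius, ridge parameter and the logistic-map input are arbitrary illustrative choices and have nothing to do with the semiconductor-ring-laser model itself.

    import numpy as np

    rng = np.random.default_rng(0)
    N_RES, WASHOUT, TRAIN, TEST = 200, 100, 1500, 300

    # Input signal: a chaotic logistic-map series (illustrative choice).
    u = np.empty(WASHOUT + TRAIN + TEST + 1)
    u[0] = 0.4
    for t in range(len(u) - 1):
        u[t + 1] = 3.9 * u[t] * (1 - u[t])

    w_in = rng.uniform(-0.5, 0.5, size=N_RES)
    w = rng.normal(size=(N_RES, N_RES))
    w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))   # spectral radius 0.9

    def run_reservoir(inputs):
        x, states = np.zeros(N_RES), []
        for val in inputs:
            x = np.tanh(w @ x + w_in * val)
            states.append(x.copy())
        return np.array(states)

    states = run_reservoir(u[:-1])                    # state at t, target u[t+1]
    X = states[WASHOUT:WASHOUT + TRAIN]
    y = u[WASHOUT + 1:WASHOUT + TRAIN + 1]
    # Ridge-regression readout.
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_RES), X.T @ y)

    X_test = states[WASHOUT + TRAIN:]
    y_test = u[WASHOUT + TRAIN + 1:]
    pred = X_test @ w_out
    print("test NMSE:", np.mean((pred - y_test) ** 2) / np.var(y_test))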

  9. The University Next Door: Developing a Centralized Unit That Strategically Cultivates Community Engagement at an Urban University

    Science.gov (United States)

    Holton, Valerie L.; Early, Jennifer L.; Resler, Meghan; Trussell, Audrey; Howard, Catherine

    2016-01-01

    Using Kotter's model of change as a framework, this case study will describe the structure and efforts of a centralized unit within an urban, research university to deepen and extend the institutionalization of community engagement. The change model will be described along with details about the implemented strategies and practices that fall…

  10. A test of wearable computer equipment for process plant personnel

    Energy Technology Data Exchange (ETDEWEB)

    Nystad, Espen; Olsen, Asle; Pirus, Dominique

    2006-03-15

    Work performed by process plant personnel may be supported by wearable computers to give improved safety and reduced workload. A test of various types of computer equipment has been performed to evaluate the usability of each type of equipment for process plant tasks. Eight participants tested two kinds of displays, a head-mounted display (HMD) and a touch-sensitive LCD display worn in a belt, two kinds of keyboard, a wrist keyboard and a software keyboard operated with a stylus, and two kinds of cursor control devices, a track pad and a stylus. The equipment was evaluated by subjective usability ratings, performance errors, and performance time. The usability ratings showed a clear preference for the wrist keyboard over the software keyboard, and for the touch screen over the HMD, but no preference regarding cursor control device (track pad or stylus). The software keyboard had the most typing errors. A computer configuration with HMD, wrist keyboard and track pad had the highest task performance time. Some usability problems were registered for each piece of equipment tested in the study. These problems were related both to the isolated interaction with the equipment, and to using the equipment for specific tasks in a specific context. The report gives some recommendations for use of wearable computer equipment by process plant personnel. (auth)

  11. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gani, Rafiqul

    2007-01-01

    Chemical processes are generally modeled through monoscale approaches, which, while not adequate, satisfy a useful role in product-process design. In this case, use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework for model generation, analysis, solution and implementation is necessary for the development and application of the desired model-based approach for product-centric process design/analysis. This goal is achieved through the combination of a system for model development (ModDev) and a modelling tool (MoT) for model translation, analysis and solution. The integration of ModDev, MoT and ICAS or any other external software or process simulator (using COM-Objects) permits the generation of different models and/or process configurations for purposes of simulation, design and analysis. Consequently, it is possible...

  12. A learnable parallel processing architecture towards unity of memory and computing

    Science.gov (United States)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.
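
    "iMemComp" computes with the same resistive-switching devices that store the data. A widely cited primitive for such stateful logic is material implication (IMPLY) together with a reset-to-0 operation, from which NAND, and hence any Boolean function, can be built. The Python toy below models that textbook construction on stored bits as an illustration of logic-in-memory in general; it is not the specific iMemComp circuit.

    from itertools import product

    def imply(state, p, q):
        """q <- p IMPLY q, computed in place on the stored bits."""
        state[q] = (1 - state[p]) | state[q]

    def false(state, q):
        """q <- 0 (reset the device)."""
        state[q] = 0

    def nand_via_imply(a, b):
        s = {"p": a, "q": b, "work": 0}
        false(s, "work")          # work = 0
        imply(s, "p", "work")     # work = NOT p
        imply(s, "q", "work")     # work = (NOT q) OR (NOT p) = NAND(p, q)
        return s["work"]

    for a, b in product((0, 1), repeat=2):
        print(f"NAND({a},{b}) = {nand_via_imply(a, b)}  (expected {1 - (a & b)})")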

  13. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-01-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area. PMID:26271243

  14. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  15. Test bank to accompany Computers data and processing

    CERN Document Server

    Deitel, Harvey M

    1980-01-01

    Test Bank to Accompany Computers and Data Processing provides a variety of questions from which instructors can easily custom-tailor exams appropriate for their particular courses. This book contains over 4000 short-answer questions that span the full range of topics for an introductory computing course. This book is organized into five parts encompassing 19 chapters. This text provides a very large number of questions so that instructors can produce different exams testing essentially the same topics in succeeding semesters. Three types of questions are included in this book, including multiple ch

  16. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)
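
    The "uniform distribution of black hole spin vector dot products" mentioned above is, for the initial conditions, a simple geometric fact: the dot product of two independent, isotropically oriented unit vectors is uniform on [-1, 1]. The short NumPy check below verifies this numerically; it does not reproduce the post-Newtonian evolution or the GPU implementation.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000

    def random_unit_vectors(n):
        """Isotropically distributed 3-D unit vectors via normalized Gaussians."""
        v = rng.normal(size=(n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    dots = np.einsum("ij,ij->i", random_unit_vectors(n), random_unit_vectors(n))
    hist, _ = np.histogram(dots, bins=10, range=(-1.0, 1.0))
    print("counts per bin (should be nearly equal):", hist)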

  17. PO*WW*ER mobile treatment unit process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous components into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst-case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  18. PO*WW*ER mobile treatment unit process hazards analysis

    International Nuclear Information System (INIS)

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous components into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst-case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  19. On the Computational Complexity of Degenerate Unit Distance Representations of Graphs

    Science.gov (United States)

    Horvat, Boris; Kratochvíl, Jan; Pisanski, Tomaž

    Some graphs admit drawings in the Euclidean k-space in such a (natural) way that edges are represented as line segments of unit length. Such embeddings are called k-dimensional unit distance representations. The embedding is strict if the distances of points representing nonadjacent pairs of vertices are different from 1. When two non-adjacent vertices are drawn at the same point, we say that the representation is degenerate. The computational complexity of nondegenerate embeddings has been studied before. We initiate the study of the computational complexity of (possibly) degenerate embeddings. In particular we prove that for every k ≥ 2, deciding if an input graph has a (possibly) degenerate k-dimensional unit distance representation is NP-hard.
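
    The decision problem above is NP-hard, but verifying a given drawing is easy. The Python sketch below checks, for a graph and candidate coordinates, whether the drawing is a unit distance representation and whether it is strict or degenerate in the sense defined above; the example graph (a 4-cycle drawn as the unit square) is chosen purely for illustration.

    from math import dist, isclose

    def classify(vertices, edges, coords, tol=1e-9):
        """Return (is_unit_distance, is_strict, is_degenerate) for a drawing."""
        edge_set = {frozenset(e) for e in edges}
        unit = all(isclose(dist(coords[u], coords[v]), 1.0, abs_tol=tol)
                   for u, v in edges)
        strict = all(not isclose(dist(coords[u], coords[v]), 1.0, abs_tol=tol)
                     for u in vertices for v in vertices
                     if u < v and frozenset((u, v)) not in edge_set)
        degenerate = any(dist(coords[u], coords[v]) < tol
                         for u in vertices for v in vertices if u < v)
        return unit, strict, degenerate

    # The unit square: a 2-dimensional unit distance representation of the 4-cycle.
    square = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
    print(classify([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], square))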

  20. A RANDOM FUNCTIONAL CENTRAL LIMIT THEOREM FOR PROCESSES OF PRODUCT SUMS OF LINEAR PROCESSES GENERATED BY MARTINGALE DIFFERENCES

    Institute of Scientific and Technical Information of China (English)

    WANG YUEBAO; YANG YANG; ZHOU HAIYANG

    2003-01-01

    A random functional central limit theorem is obtained for processes of partial sums and product sums of linear processes generated by non-stationary martingale differences. It develops and improves some corresponding results on processes of partial sums of linear processes generated by strictly stationary martingale differences, which can be found in [5].

  1. Recognition of oral spelling is diagnostic of the central reading processes.

    Science.gov (United States)

    Schubert, Teresa; McCloskey, Michael

    2015-01-01

    The task of recognition of oral spelling (stimulus: "C-A-T", response: "cat") is often administered to individuals with acquired written language disorders, yet there is no consensus about the underlying cognitive processes. We adjudicate between two existing hypotheses: Recognition of oral spelling uses central reading processes, or recognition of oral spelling uses central spelling processes in reverse. We tested the recognition of oral spelling and spelling to dictation abilities of a single individual with acquired dyslexia and dysgraphia. She was impaired relative to matched controls in spelling to dictation but unimpaired in recognition of oral spelling. Recognition of oral spelling for exception words (e.g., colonel) and pronounceable nonwords (e.g., larth) was intact. Our results were predicted by the hypothesis that recognition of oral spelling involves the central reading processes. We conclude that recognition of oral spelling is a useful tool for probing the integrity of the central reading processes. PMID:25885676

  2. Implementation of central venous catheter bundle in an intensive care unit in Kuwait: Effect on central line-associated bloodstream infections.

    Science.gov (United States)

    Salama, Mona F; Jamal, Wafaa; Al Mousa, Haifa; Rotimi, Vincent

    2016-01-01

    Central line-associated bloodstream infection (CLABSI) is an important healthcare-associated infection in critical care units. It causes substantial morbidity and mortality and incurs high costs. The use of a central venous line (CVL) insertion bundle has been shown to decrease the incidence of CLABSIs. Our aim was to study the impact of a CVL insertion bundle on the incidence of CLABSI and to study the causative microbial agents in an intensive care unit in Kuwait. Surveillance for CLABSI was conducted by a trained infection control team using National Healthcare Safety Network (NHSN) case definitions and device-days measurement methods. During the intervention period, nursing staff used a central line care bundle consisting of (1) hand hygiene by the inserter; (2) maximal barrier precautions upon insertion by the physician inserting the catheter, with a sterile drape covering the patient from head to toe; (3) use of a 2% chlorhexidine gluconate (CHG) in 70% ethanol scrub for the insertion site; (4) optimum catheter site selection; and (5) daily examination of the necessity of the central line. During the pre-intervention period, there were 5367 documented catheter-days and 80 CLABSIs, for an incidence density of 14.9 CLABSIs per 1000 catheter-days. After implementation of the interventions, there were 5052 catheter-days and 56 CLABSIs, for an incidence density of 11.08 per 1000 catheter-days. The reduction in CLABSIs per 1000 catheter-days was not statistically significant (P=0.0859). This study demonstrates that implementation of a central venous catheter post-insertion care bundle was associated with a reduction in CLABSI in an intensive care setting.
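
    The incidence densities quoted above follow directly from the event and catheter-day counts; the Python snippet below reproduces that arithmetic and adds a simple normal-approximation (Wald) comparison of the two Poisson rates. The approximate p-value is illustrative only, since the abstract does not state which test produced P=0.0859.

    from math import log, sqrt, erf

    def incidence_density(events, catheter_days):
        return 1000.0 * events / catheter_days          # per 1000 catheter-days

    pre  = (80, 5367)    # CLABSIs, catheter-days before the bundle
    post = (56, 5052)    # after the bundle

    for label, (e, d) in (("pre-bundle", pre), ("post-bundle", post)):
        print(f"{label}: {incidence_density(e, d):.2f} CLABSIs / 1000 catheter-days")

    # Rate ratio with a Wald test on the log scale (normal approximation).
    rr = (post[0] / post[1]) / (pre[0] / pre[1])
    se = sqrt(1 / pre[0] + 1 / post[0])
    z = log(rr) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))     # two-sided p-value
    print(f"rate ratio = {rr:.2f}, z = {z:.2f}, approximate p = {p:.3f}")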

  3. Acceleration of High Angular Momentum Electron Repulsion Integrals and Integral Derivatives on Graphics Processing Units.

    Science.gov (United States)

    Miao, Yipu; Merz, Kenneth M

    2015-04-14

    We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA) enabled graphical processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivative of the ERIs. However, only s, p, and d ERIs and s and p derivatives could be executed simultaneously on GPUs with the current version of CUDA and generation of NVidia GPUs using a previously described algorithm [Miao and Merz J. Chem. Theory Comput. 2013, 9, 965-976.]. Hence, we developed an algorithm to compute f type ERIs and d type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI derivative computation yields speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications. PMID:26574356

  4. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    Science.gov (United States)

    Menekse, Muhsin

    2015-01-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  5. A computer-assisted process for supersonic aircraft conceptual design

    Science.gov (United States)

    Johnson, V. S.

    1985-01-01

    Design methodology was developed and existing major computer codes were selected to carry out the conceptual design of supersonic aircraft. A computer-assisted design process resulted from linking the codes together in a logical manner to implement the design methodology. The process does not perform the conceptual design of a supersonic aircraft but it does provide the designer with increased flexibility, especially in geometry generation and manipulation. Use of the computer-assisted process for the conceptual design of an advanced technology Mach 3.5 interceptor showed the principal benefit of the process to be the ability to use a computerized geometry generator and then directly convert the geometry between formats used in the geometry code and the aerodynamics codes. Results from the interceptor study showed that a Mach 3.5 standoff interceptor with a 1000 nautical-mile mission radius and a payload of eight Phoenix missiles appears to be feasible with the advanced technologies considered. A sensitivity study showed that technologies affecting the empty weight and propulsion system would be critical in the final configuration characteristics with aerodynamics having a lesser effect for small perturbations around the baseline.

  6. AUV Localization Process using Computing Cells on FPGA

    Directory of Open Access Journals (Sweden)

    Abid Yahya

    2012-09-01

    Full Text Available Concurrent processors incorporated on a single chip for performing computations can be troublesome because, as the size of the problem increases, the execution time also increases. To reduce the execution time, a traditional approach is to compute the problem on an array of processors interconnected over a high-speed interconnection network. A duplicated process is presented in this paper to handle the arithmetic complexity of the AUV localization operation. A single clock cycle is sufficient for performing all the operations in this process. This type of process is referred to as duplicated because each cell incorporates a variety of duplicated multipliers and adders. The minimum number of clock cycles is needed to complete each iteration; however, each cell takes up more area on the chip. The calculation rate of the duplicated process is high, although its throughput is significantly limited by the small number of cells that can be accommodated by the FPGA.

  7. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: (1) critical information can be provided faster, and (2) more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs, and (2) a CUDA-enabled GPU workstation. The reference platform is a dual CPU-quad core workstation, and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring the various hardware solutions and the related software coding effort are presented.
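    As a rough illustration of the kind of moving-window texture measure the PANTEX workflow relies on, the sketch below computes a GLCM contrast value for a single window in plain NumPy. It is only a toy: the actual index combines anisotropic, rotation-invariant GLCM statistics in a specific way, and the window size, grey-level count and displacements here are arbitrary choices.

```python
import numpy as np

def glcm(window: np.ndarray, dx: int, dy: int, levels: int = 16) -> np.ndarray:
    """Normalized grey-level co-occurrence matrix for one displacement (dx, dy)."""
    # Quantize the window to a small number of grey levels.
    top = window.max()
    q = np.zeros(window.shape, dtype=int)
    if top > 0:
        q = np.floor(window / top * (levels - 1)).astype(int)
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    total = m.sum()
    return m / total if total > 0 else m

def contrast(p: np.ndarray) -> float:
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# One 32x32 moving-window patch of fake imagery; the real workflow slides
# such a window across the whole scene and maps the statistic per pixel.
window = np.random.default_rng(0).integers(0, 255, size=(32, 32)).astype(float)
displacements = [(1, 0), (0, 1), (1, 1), (1, -1)]   # crude stand-in for rotation invariance
print(np.mean([contrast(glcm(window, dx, dy)) for dx, dy in displacements]))
```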

  8. Computational simulation of multi-strut central lobed injection of hydrogen in a scramjet combustor

    Directory of Open Access Journals (Sweden)

    Gautam Choubey

    2016-09-01

    Full Text Available Multi-strut injection is an approach to increase the overall performance of a scramjet while reducing the risk of thermal choking in a supersonic combustor. Hence, a computational simulation of a scramjet combustor at Mach 2.5 with multiple central lobed struts (three struts) is presented and discussed in the present research article. The geometry and model used here are a slight modification of the DLR (German Aerospace Center) scramjet model. The present results show that the three-strut injector improves the performance of the scramjet combustor as compared to a single-strut injector. The combustion efficiency is also found to be highest in the case of the three-strut fuel injection system. In order to validate the results, the numerical data for single-strut injection are compared with an experimental result taken from the literature.

  9. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  10. First International Conference Multimedia Processing, Communication and Computing Applications

    CERN Document Server

    Guru, Devanur

    2013-01-01

    ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications and the theme of the Conference is chosen as ‘Multimedia Processing and its Applications’. Multimedia processing has been an active research area contributing in many frontiers of today’s science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments that are taking place in various fields of multimedia processing and is widely used in many disciplines such as Medical Diagnosis, Digital Forensic, Object Recognition, Image and Video Analysis, Robotics, Military, Automotive Industries, Surveillance and Security, Quality Inspection, etc. The book will assist the research community to get the insight of the overlapping works which are being carried out across the globe at many medical hospitals and instit...

  11. Computer aided microbial safety design of food processes.

    Science.gov (United States)

    Schellekens, M; Martens, T; Roberts, T A; Mackey, B M; Nicolaï, B M; Van Impe, J F; De Baerdemaeker, J

    1994-12-01

    To reduce the time required for product development, to avoid expensive experimental tests, and to quantify safety risks for fresh products and the consequences of processing, there is a growing interest in computer-aided food process design. This paper discusses the application of hybrid object-oriented and rule-based expert system technology to represent the data and knowledge of microbial experts and food engineers. Finite element models for heat transfer calculation routines, microbial growth and inactivation models and texture kinetics are combined with food composition data, thermophysical properties, process steps and expert knowledge on the type and quantity of microbial contamination. A prototype system has been developed to evaluate the effect of changes in food composition, process steps and process parameters on the microbiological safety and textural quality of foods.
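    For context, the microbial inactivation models coupled to such heat-transfer calculations are often simple first-order (D-value/z-value) kinetics. The sketch below shows that textbook form only; the parameter values are invented and it is not the specific model set used in the prototype system described above.

```python
import numpy as np

def log_reduction(temps_c, dt_s, d_ref_min=0.2, t_ref_c=121.1, z_c=10.0):
    """Cumulative log10 reduction for a temperature history (classic D/z model).

    temps_c : product temperature at each time step (deg C)
    dt_s    : time step (s)
    d_ref_min, t_ref_c, z_c : illustrative D-value at a reference temperature
                              and z-value; real values are organism-specific.
    """
    temps_c = np.asarray(temps_c, dtype=float)
    d_min = d_ref_min * 10.0 ** ((t_ref_c - temps_c) / z_c)  # D-value at each temperature
    return float(np.sum((dt_s / 60.0) / d_min))              # sum of dt / D

# Example: product held at 115 deg C for 10 minutes, 1 s steps.
history = np.full(600, 115.0)
print(f"predicted log10 reduction: {log_reduction(history, 1.0):.1f}")
```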

  12. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  13. Assessment of processes affecting low-flow water quality of Cedar Creek, west-central Illinois

    Science.gov (United States)

    Schmidt, Arthur R.; Freeman, W.O.; McFarlane, R.D.

    1989-01-01

    Water quality and the processes that affect dissolved oxygen, nutrient (nitrogen and phosphorus species), and algal concentrations were evaluated for a 23.8-mile reach of Cedar Creek near Galesburg, west-central Illinois, during periods of warm-weather, low-flow conditions. Water quality samples were collected and stream conditions were measured over a diel (24 hour) period on three occasions during July and August 1985. Analysis of data from the diel-sampling periods indicates that concentrations of iron, copper, manganese, phenols, and total dissolved solids exceeded Illinois' general-use water quality standards in some locations. Dissolved-oxygen concentrations were less than the State minimum standard throughout much of the study reach. These data were used to calibrate and verify a one-dimensional, steady-state, water quality model. The computer model was used to assess the relative effects on low-flow water quality of processes such as algal photosynthesis and respiration, ammonia oxidation, biochemical oxygen demand, sediment oxygen demand, and stream reaeration. Results from model simulations and sensitivity analysis indicate that sediment oxygen demand is the principal cause of low dissolved-oxygen concentrations in the creek. (USGS)
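    The processes listed above (BOD decay, reaeration, sediment oxygen demand) are the ingredients of classic one-dimensional dissolved-oxygen models. The sketch below is a minimal Streeter-Phelps-style illustration with a crude constant sediment term; it is not the USGS model calibrated in the study, and every parameter value is made up.

```python
import numpy as np

def do_deficit(t_days, l0=8.0, d0=1.0, kd=0.35, ka=0.6, sod=0.3):
    """Streeter-Phelps DO deficit (mg/L) plus a crude constant sediment term.

    l0  : initial BOD (mg/L)          d0 : initial DO deficit (mg/L)
    kd  : BOD decay rate (1/day)      ka : reaeration rate (1/day)
    sod : lumped sediment oxygen demand (mg/L/day), treated as constant
    All values are illustrative only.
    """
    t = np.asarray(t_days, dtype=float)
    classic = kd * l0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + d0 * np.exp(-ka * t)
    sediment = sod / ka * (1.0 - np.exp(-ka * t))   # constant SOD source approaches sod/ka
    return classic + sediment

t = np.linspace(0.0, 5.0, 6)              # travel time along the reach, days
saturation = 8.5                          # DO saturation (mg/L), illustrative
print(saturation - do_deficit(t))         # predicted DO concentration profile
```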

  14. Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)

    International Nuclear Information System (INIS)

    Monte Carlo simulation is well suited for solving the Boltzmann neutron transport equation in inhomogeneous media for complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes on a desktop PC. The interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is rapidly growing. This is due to the massive parallelism provided by the latest GPU technologies, which is the most promising solution to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. Results obtained in this work suggest that a speedup of several orders of magnitude is possible using state-of-the-art GPU technologies. (author)
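    To give a flavour of the kind of fixed-source problem being accelerated, here is a deliberately tiny, CPU-only Monte Carlo estimate of transmission through a homogeneous slab (one energy group, isotropic scattering, invented cross sections). Production GPU codes track far more physics; this only illustrates the random-walk kernel that parallelizes so well.

```python
import numpy as np

rng = np.random.default_rng(42)

def slab_transmission(n_particles, thickness=5.0, sigma_t=1.0, sigma_s=0.4):
    """Fraction of source neutrons leaking through a 1-D slab (toy model)."""
    transmitted = 0
    for _ in range(n_particles):
        x, mu = 0.0, rng.uniform(0.0, 1.0)             # born at left face, moving right
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # distance to next collision
            if x >= thickness:
                transmitted += 1                        # leaked out the right face
                break
            if x < 0.0:
                break                                   # leaked out the left face
            if rng.uniform() > sigma_s / sigma_t:
                break                                   # absorbed
            mu = rng.uniform(-1.0, 1.0)                 # isotropic scatter
    return transmitted / n_particles

print(f"transmission probability ~ {slab_transmission(100_000):.4f}")
```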

  15. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Indian Academy of Sciences (India)

M. K. Griffiths; V. Fedun; R. Erdélyi

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1–3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  16. Ferrofluid Simulations with the Barnes-Hut Algorithm on Graphics Processing Units

    CERN Document Server

    Polyakov, A Yu; Denisov, S; Reva, V V; Hanggi, P

    2012-01-01

    We present an approach to molecular-dynamics simulations of dilute ferrofluids on graphics processing units (GPUs). Our numerical scheme is based on a GPU-oriented modification of the Barnes-Hut (BH) algorithm designed to increase the parallelism of computations. For an ensemble consisting of one million ferromagnetic particles, the performance of the proposed algorithm on a Tesla M2050 GPU demonstrated a computational-time speed-up of four orders of magnitude compared to the performance of the sequential All-Pairs (AP) algorithm on a single-core CPU, and two orders of magnitude compared to the performance of the optimized AP algorithm on the GPU. The accuracy of the scheme is corroborated by comparing theoretical predictions with the results of numerical simulations.

  17. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the time the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first set comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second set is the Second Stage board. The associative memories perform the pattern matching, looking for correlations within the incoming data compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  18. Stakeholder consultations regarding centralized power procurement processes in Ontario

    International Nuclear Information System (INIS)

    In 2004, Ontario held 4 Requests for Proposals (RFPs) to encourage the development of new clean renewable combined heat and power generation and the implementation of conservation and demand management programs. Details of a stakeholder consultation related to the RFP process were presented in this paper. The aim of the consultation was to synthesize stakeholder comments and to provide appropriate recommendations for future RFPs held by the Ontario Power Authority (OPA). The financial burden of bidding was discussed, as well as communications procedures and contract ambiguities. Issues concerning the criteria used for qualifying potential bidders and evaluating project submissions were reviewed. Recommendations for future processes included prequalification, a simplification in collusion requirements, and a fixed time response. It was also recommended that the process should not emphasize financing as lenders do not make firm commitments to bidders prior to a bid being accepted. It was suggested that the amount of bid security should vary with the project size and phase of development, and that the contracts for differences format should be refined to allow participants to propose parameters. Issues concerning audit procedures and performance deviations were reviewed. It was suggested that contract terms should be compatible with gas markets. It was also suggested that the OPA should adopt a more simplified approach to co-generation proposals, where proponents can specify amounts of energy and required prices. The adoption of the Swiss challenge approach of allowing other vendors an opportunity to match or beat terms on an offer was recommended. It was suggested that renewables should be acquired through a targeted and volume limited standard-offer process to be set yearly. Conservation and demand management recommendations were also presented. It was suggested that the OPA should serve as a facilitator of clean development mechanism (CDM) programs. It was

  19. Meshing scheme in the computation of spontaneous fault rupture process

    Institute of Scientific and Technical Information of China (English)

    LIU Qi-ming; HEN Xiao-fei

    2008-01-01

    The choice of spatial grid size has long been a crucial issue in all kinds of numerical algorithms. By using the BIEM (Boundary Integral Equation Method) to calculate the rupture process of a planar fault embedded in an isotropic and homogeneous full space with a simple discretization scheme, this paper focuses on what grid size should be applied to control the error as well as maintain computing efficiency for different parameter combinations of (Dc, Te), where Dc is the critical slip-weakening distance and Te is the initial stress on the fault plane. We have preliminarily found a way of properly choosing the spatial grid size, which is of great significance in the computation of the seismic source rupture process with the BIEM.

  20. GPU-Based FFT Computation for Multi-Gigabit WirelessHD Baseband Processing

    Directory of Open Access Journals (Sweden)

    Nicholas Hinitt

    2010-01-01

    Full Text Available The next generation of Graphics Processing Units (GPUs) is being considered for non-graphics applications. Millimeter wave (60 GHz) wireless networks that are capable of multi-gigabit per second (Gbps) transfer rates require a significant baseband throughput. In this work, we consider the baseband of WirelessHD, a 60 GHz communications system, which can provide a data rate of up to 3.8 Gbps over a short-range wireless link. Thus, we explore the feasibility of achieving gigabit baseband throughput using GPUs. One of the most computationally intensive functions commonly used in baseband communications, the Fast Fourier Transform (FFT) algorithm, is implemented on an NVIDIA GPU using their general-purpose computing platform called the Compute Unified Device Architecture (CUDA). The paper first investigates the implementation of an FFT algorithm using the GPU hardware and exploiting the computational capability available. It then outlines the limitations discovered and the methods used to overcome these challenges. Finally, a new algorithm to compute the FFT is proposed, which reduces interprocessor communication. It is further optimized by improving memory access, enabling the processing rate to exceed 4 Gbps and achieving a processing time for a 512-point FFT of less than 200 ns using a two-GPU solution.
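    As a point of reference for the 200 ns per 512-point FFT figure, the snippet below times batched 512-point FFTs with NumPy on the CPU. It is a rough, illustrative benchmark only and is not expected to approach the GPU numbers reported above.

```python
import time
import numpy as np

rng = np.random.default_rng(1)
batch, n = 10_000, 512
x = rng.standard_normal((batch, n)) + 1j * rng.standard_normal((batch, n))

start = time.perf_counter()
X = np.fft.fft(x, axis=1)          # batched 512-point FFTs
elapsed = time.perf_counter() - start

per_fft_ns = elapsed / batch * 1e9
print(f"{per_fft_ns:.0f} ns per 512-point FFT on this CPU "
      f"(target in the paper: < 200 ns on two GPUs)")
```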

  1. Synthesis of computational structures for analog signal processing

    CERN Document Server

    Popa, Cosmin Radu

    2011-01-01

    Presents the most important classes of computational structures for analog signal processing, including differential or multiplier structures, squaring or square-rooting circuits, exponential or Euclidean distance structures and active resistor circuits. Introduces the original concept of the multifunctional circuit, an active structure that is able to implement, starting from the same circuit core, a multitude of continuous mathematical functions. Covers mathematical analysis, design and implementation of a multitude of function generator structures.

  2. Computer-simulated development process of Chinese characters font cognition

    Science.gov (United States)

    Chen, Jing; Mu, Zhichun; Sun, Dehui; Hu, Dunli

    2008-10-01

    The study of Chinese character cognition is an important research topic in cognitive science and computer science, especially artificial intelligence. In this paper, based on the traits of Chinese characters, a database of Chinese character font representations and a model of computer simulation of Chinese character font cognition are constructed from the perspective of cognitive science. The font cognition of Chinese characters is actually a gradual process involving the accumulation of knowledge. Using computer simulation, a development model of Chinese character cognition was constructed, which is an important research topic in Chinese character cognition. This model is based on a self-organizing neural network and an adaptive resonance theory (ART) neural network. By combining the SOFM and ART2 networks, two sets of inputs were trained. Through training and testing, the development process of Chinese character font cognition was simulated, and the results from this model could be compared with the results obtained using SOFM alone. The analysis of the results suggests that the model is able to account for some empirical results, so the model can, to an extent, simulate the development process of Chinese character cognition.
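    For readers unfamiliar with the self-organizing feature map (SOFM) component, here is a minimal NumPy sketch of SOM training. It illustrates the general algorithm only; the grid size, decay schedule and input data are invented, and the ART2 coupling used in the paper is not shown.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing map on the row vectors in `data`."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit (BMU): the grid cell whose weight is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Learning rate and neighbourhood width decay over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            # A Gaussian neighbourhood around the BMU pulls weights toward x.
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            influence = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
            step += 1
    return weights

data = np.random.default_rng(1).random((200, 16))   # stand-in for font feature vectors
som = train_som(data)
print(som.shape)   # (8, 8, 16)
```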

  3. Solar augmentation for process heat with central receiver technology

    Science.gov (United States)

    Kotzé, Johannes P.; du Toit, Philip; Bode, Sebastian J.; Larmuth, James N.; Landman, Willem A.; Gauché, Paul

    2016-05-01

    Coal-fired boilers are currently one of the most widespread ways to deliver process heat to industry. John Thompson Boilers (JTB) offer industrial steam supply solutions for industry and utility-scale applications in Southern Africa. Transport costs add significantly to the coal price in locations far from the coal fields in Mpumalanga, Gauteng and Limpopo. The Helio100 project developed a low-cost, self-learning, wireless heliostat technology that requires no ground preparation. This is attractive as an augmentation alternative, as it can easily be installed on any open land that a client may have available. This paper explores the techno-economic feasibility of solar augmentation for JTB coal-fired steam boilers by comparing the fuel savings of a generic 2 MW heliostat field at various locations throughout South Africa.

  4. Central pain processing in chronic tension-type headache

    DEFF Research Database (Denmark)

    Lindelof, Kim; Ellrich, Jens; Jensen, Rigmor;

    2009-01-01

    The blink reflex (BR) reflects neuronal excitability due to nociceptive input in the brainstem. The aim of this study was to investigate nociceptive processing at the level of the brainstem in an experimental pain model of CTTH symptoms. METHODS: The effect of conditioning pain, a 5 min infusion of hypertonic saline into the neck muscles, was investigated in 20 patients with CTTH and 20 healthy controls. In addition, a pilot study with isotonic saline was performed with 5 subjects in each group. The BR was elicited by electrical stimuli with an intensity of four times the pain threshold, with a superficial concentric electrode. We ... A combined homotopic and heterotopic effect of the conditioning pain onto the blink reflex could account for this finding.

  5. Perceptual weights for loudness reflect central spectral processing

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2011-01-01

    Weighting patterns for loudness obtained using the reverse correlation method are thought to reveal the relative contributions of different frequency regions to total loudness, the equivalent of specific loudness. Current models of loudness assume that specific loudness is determined by peripheral processes such as compression and masking. Here we test this hypothesis using 20-tone harmonic complexes (200 Hz f0, 200 to 4000 Hz, 250 ms, 65 dB/component) added in opposite phase relationships (Schroeder positive and negative). Due to the varying degree of envelope modulations, these time-reversed harmonic ... differences in the stimuli, a similar level rove (68 dB) was introduced on each component of the Schroeder-positive and Schroeder-negative forward maskers. The Schroeder-negative maskers continued to be more effective. These results suggest that perceptual weights for loudness are not completely determined by peripheral ...

  6. Cassava Processing and Marketing by Rural Women in the Central Region of Cameroon

    OpenAIRE

    SHIOYA, Akiyo

    2013-01-01

    This study examines the development of rural women's commercial activities in Central Cameroon, particularly the Department of Lekié, which is adjacent to Yaoundé, the capital of Cameroon. I focused on cassava processing technologies and the sale of cassava-based processed foods undertaken by women in a suburban farming village. Cassava is one of the main staple foods in central Cameroon, including in urban areas. One of its characteristics is that it keeps for a long period in the ground but ...

  7. Experimental determination of the segregation process using computer tomography

    Directory of Open Access Journals (Sweden)

    Konstantin Beckmann

    2016-08-01

    Full Text Available Modelling methods such as DEM and CFD are increasingly used for developing highly efficient combine cleaning systems. For this purpose it is necessary to verify the complex segregation and separation processes in the combine cleaning system. One way is to determine the segregation and separation functions using 3D computer tomography (CT). This method makes it possible to visualize and analyse the movement behaviour of the components of the mixture during the segregation and separation process, as well as to derive descriptive process parameters. A mechanically excited miniature test rig was designed and built at the company CLAAS Selbstfahrende Erntemaschinen GmbH to achieve this aim. The investigations were carried out at the Fraunhofer Institute for Integrated Circuits IIS. Through the evaluation of the recorded images the segregation process is described visually. A more detailed analysis enabled the development of segregation and separation functions based on the different densities of grain and material other than grain.

  9. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  10. The AMchip04 and the Processing Unit Prototype for the FastTracker

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L; Volpi, G

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  11. Computation studies into architecture and energy transfer properties of photosynthetic units from filamentous anoxygenic phototrophs

    Energy Technology Data Exchange (ETDEWEB)

    Linnanto, Juha Matti [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu (Estonia); Freiberg, Arvi [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu, Estonia and Institute of Molecular and Cell Biology, University of Tartu, Riia 23, 51010 Tartu (Estonia)

    2014-10-06

    We have used different computational methods to study structural architecture, and light-harvesting and energy transfer properties of the photosynthetic unit of filamentous anoxygenic phototrophs. Due to the huge number of atoms in the photosynthetic unit, a combination of atomistic and coarse methods was used for electronic structure calculations. The calculations reveal that the light energy absorbed by the peripheral chlorosome antenna complex transfers efficiently via the baseplate and the core B808–866 antenna complexes to the reaction center complex, in general agreement with the present understanding of this complex system.

  12. Children's Writing Processes when Using Computers: Insights Based on Combining Analyses of Product and Process

    Science.gov (United States)

    Gnach, Aleksandra; Wiesner, Esther; Bertschi-Kaufmann, Andrea; Perrin, Daniel

    2007-01-01

    Children and young people are increasingly performing a variety of writing tasks using computers, with word processing programs thus becoming their natural writing environment. The development of keystroke logging programs enables us to track the process of writing, without changing the writing environment for the writers. In the myMoment schools…

  13. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    Science.gov (United States)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factor of improvement in the analytical versus numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU was 1.6×10^4, 4.9×10^3, and 3.8×10^3, respectively. Various ideas for algorithm enhancements will be discussed.
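    Two quick illustrations of the numbers above: the pixel-grid storage estimate, and the closed-form Fourier transform of a single axis-aligned rectangle, the simplest shape a polygon decomposition produces. Both are hedged sketches; the actual GPU implementation handles general polygons, hierarchy and repetitions.

```python
import numpy as np

# 1) Storage needed to rasterize a full 26 mm x 33 mm mask at 5 nm pixels
#    in single precision (4 bytes per pixel).
pixels = (26e-3 / 5e-9) * (33e-3 / 5e-9)
print(f"{pixels * 4 / 1e12:.0f} TB")          # ~137 TB

# 2) Analytical FT of a rectangle of widths (a, b) centred at (x0, y0):
#    F(fx, fy) = a*b * sinc(a*fx) * sinc(b*fy) * exp(-2j*pi*(fx*x0 + fy*y0)),
#    with sinc(u) = sin(pi*u)/(pi*u), which is exactly numpy's np.sinc.
def rect_ft(fx, fy, a, b, x0=0.0, y0=0.0):
    return (a * b * np.sinc(a * fx) * np.sinc(b * fy)
            * np.exp(-2j * np.pi * (fx * x0 + fy * y0)))

fx, fy = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
print(np.abs(rect_ft(fx, fy, a=1.0, b=0.5)) ** 2)   # squared modulus, as used for the heating profile
```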

  14. A Comparison of Dental Chartings Performed at the Joint POW/MIA Accounting Command Central Identification Laboratory and the Kokura Central Identification Unit on Remains Identified from the Korean War.

    Science.gov (United States)

    Shiroma, Calvin Y

    2016-01-01

    During the Korean War, the Office of the Quartermaster General's Graves Registration Service (GRS) was responsible for the recovery, processing, identification, and repatriation of US remains. In January 1951, the GRS established a Central Identification Unit (CIU) at Kokura, Japan. At the Kokura CIU, postmortem dental examinations were performed by the dental technicians. Thirty-nine postmortem dental examinations performed at the CIU were compared to the findings documented in the Forensic Odontology Reports written at the JPAC Central Identification Laboratory (CIL). Differences were noted in 20 comparisons (51%). The majority of the discrepancies was considered negligible and would not alter the JPAC decision to disinter a set of unknown remains. Charting discrepancies that were considered significant included the occasional failure of the Kokura technicians to identify teeth with inter-proximal or esthetic restorations and the misidentification of a mechanically prepared tooth (i.e., tooth prepared for a restoration) as a carious surface.

  15. 24 CFR 290.21 - Computing annual number of units eligible for substitution of tenant-based assistance or...

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development, 2010-04-01: Computing annual number of units eligible for substitution of tenant-based assistance or alternative uses. Section 290.21, Housing and... Multifamily Projects, § 290.21 Computing annual number of units eligible for substitution of...

  16. In-Core Computation of Geometric Centralities with HyperBall: A Hundred Billion Nodes and Beyond

    CERN Document Server

    Boldi, Paolo

    2013-01-01

    Given a social network, which of its nodes are more central? This question has been asked many times in sociology, psychology and computer science, and a whole plethora of centrality measures (a.k.a. centrality indices, or rankings) were proposed to account for the importance of the nodes of a network. In this paper, we approach the problem of computing geometric centralities, such as closeness and harmonic centrality, on very large graphs; traditionally this task requires an all-pairs shortest-path computation in the exact case, or a number of breadth-first traversals for approximated computations, but these techniques yield very weak statistical guarantees on highly disconnected graphs. We rather assume that the graph is accessed in a semi-streaming fashion, that is, that adjacency lists are scanned almost sequentially, and that a very small amount of memory (in the order of a dozen bytes) per node is available in core memory. We leverage the newly discovered algorithms based on HyperLogLog counters, making...
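    For reference, the exact harmonic centrality of a node is the sum of reciprocal shortest-path distances from all other nodes. The BFS-based sketch below computes it exactly on a toy directed graph; this is the quantity that HyperBall approximates at scale using HyperLogLog counters instead of explicit BFS frontiers.

```python
from collections import deque

def harmonic_centrality(adj):
    """Exact harmonic centrality via one BFS per node (toy-scale only).

    adj : dict mapping node -> iterable of successor nodes
    Returns {node v: sum over other nodes u of 1 / d(u, v)}, with unreachable
    pairs contributing 0.
    """
    # Reverse the graph: the centrality of v sums 1/d(u, v) over sources u.
    rev = {v: [] for v in adj}
    for u, outs in adj.items():
        for v in outs:
            rev.setdefault(v, []).append(u)

    scores = {}
    for v in rev:
        dist = {v: 0}
        queue = deque([v])
        while queue:
            x = queue.popleft()
            for y in rev.get(x, ()):
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        scores[v] = sum(1.0 / d for d in dist.values() if d > 0)
    return scores

graph = {"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []}
print(harmonic_centrality(graph))
```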

  17. Computer Simulation Methods for Crushing Process in an Jaw Crusher

    Science.gov (United States)

    Il'ich Beloglazov, Ilia; Andreevich Ikonnikov, Dmitrii

    2016-08-01

    One of the trends at modern mining enterprises is the application of combined systems for extraction and transportation of the rock mass. This technology involves the use of conveyor lines as the continuous link of the combined system. The application of conveyor transport provides a significant reduction in energy costs, an increase in labour productivity and process automation. However, the use of conveyor transport imposes certain requirements on the quality of the transported material, the maximum piece size of the rock mass being one of the basic parameters. Crushing plants perform coarse crushing followed by crushing of the material to the maximum piece size that can be carried by conveyor transport; this stage is often performed by jaw crushers. Modelling the crushing process in jaw crushers makes it possible to optimize the workflow and increase the efficiency of the equipment in the subsequent transportation and processing of rock. In this paper we studied the interaction between the walls of the jaw crusher and the bulk material using the discrete element method (DEM). The article examines the modelling process in stages. It includes the design of the crusher construction in a solid and surface modelling system. The crushing process was modelled using experimental data obtained with the BOYD crushing unit. The breakage process and the resulting particle size distribution were studied. Analysis of the results shows good agreement between the actual experiment and the modelling process.

  18. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  19. Undergraduate Game Degree Programs in the United Kingdom and United States: A Comparison of the Curriculum Planning Process

    Science.gov (United States)

    McGill, Monica M.

    2010-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…

  20. Corrective Action Decision Document for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    U.S. Department of Energy Nevada Operations Office

    1999-04-02

    This Corrective Action Decision Document (CADD) identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 417: Central Nevada Test Area Surface, Nevada, under the Federal Facility Agreement and Consent Order. Located in Hot Creek Valley in Nye County, Nevada, and consisting of three separate land withdrawal areas (UC-1, UC-3, and UC-4), CAU 417 is comprised of 34 corrective action sites (CASs) including 2 underground storage tanks, 5 septic systems, 8 shaker pad/cuttings disposal areas, 1 decontamination facility pit, 1 burn area, 1 scrap/trash dump, 1 outlier area, 8 housekeeping sites, and 16 mud pits. Four field events were conducted between September 1996 and June 1998 to complete a corrective action investigation indicating that the only contaminant of concern was total petroleum hydrocarbon (TPH) which was found in 18 of the CASs. A total of 1,028 samples were analyzed. During this investigation, a statistical approach was used to determine which depth intervals or layers inside individual mud pits and shaker pad areas were above the State action levels for the TPH. Other related field sampling activities (i.e., expedited site characterization methods, surface geophysical surveys, direct-push geophysical surveys, direct-push soil sampling, and rotosonic drilling located septic leachfields) were conducted in this four-phase investigation; however, no further contaminants of concern (COCs) were identified. During and after the investigation activities, several of the sites which had surface debris but no COCs were cleaned up as housekeeping sites, two septic tanks were closed in place, and two underground storage tanks were removed. The focus of this CADD was to identify CAAs which would promote the prevention or mitigation of human exposure to surface and subsurface soils with contaminant

  2. Calibrated Multiple Event Relocations of the Central and Eastern United States

    Science.gov (United States)

    Yeck, W. L.; Benz, H.; McNamara, D. E.; Bergman, E.; Herrmann, R. B.; Myers, S. C.

    2015-12-01

    Earthquake locations are a first-order observable that forms the basis of a wide range of seismic analyses. Currently, the ANSS catalog primarily contains published single-event earthquake locations that rely on assumed 1D velocity models. Increasing the accuracy of cataloged earthquake hypocenter locations and origin times and constraining their associated errors can improve our understanding of Earth structure and have a fundamental impact on subsequent seismic studies. Multiple-event relocation algorithms often increase the precision of relative earthquake hypocenters but are hindered by their limited ability to provide realistic location uncertainties for individual earthquakes. Recently, a Bayesian approach to the multiple event relocation problem has proven to have many benefits including the ability to: (1) handle large data sets; (2) easily incorporate a priori hypocenter information; (3) model phase assignment errors; and, (4) correct for errors in the assumed travel time model. In this study we employ Bayesloc [Myers et al., 2007, 2009] to relocate earthquakes in the Central and Eastern United States from 1964-present. We relocate ~11,000 earthquakes with a dataset of ~439,000 arrival time observations. Our dataset includes arrival-time observations from the ANSS catalog supplemented with arrival-time data from the Reviewed ISC Bulletin (prior to 1981), targeted local studies, and arrival-time data from the TA Array. One significant benefit of the Bayesloc algorithm is its ability to incorporate a priori constraints on the probability distributions of specific earthquake location parameters. To constrain the inversion, we use high-quality calibrated earthquake locations from local studies, including studies from: Raton Basin, Colorado; Mineral, Virginia; Guy, Arkansas; Cheneville, Quebec; Oklahoma; and Mt. Carmel, Illinois. We also add depth constraints to 232 earthquakes from regional moment tensors. Finally, we add constraints from four historic (1964

  3. Shifting Forest Value Orientations in the United States, 1980-2001: A Computer Content Analysis

    OpenAIRE

    David N. Bengston; Trevor J. Webb; Fan, David P.

    2004-01-01

    This paper examines three forest value orientations - clusters of interrelated values and basic beliefs about forests - that emerged from an analysis of the public discourse about forest planning, management, and policy in the United States. The value orientations include anthropocentric, biocentric, and moral/spiritual/aesthetic orientations toward forests. Computer coded content analysis was used to identify shifts in the relative importance of these value orientations over the period 1980 ...

  4. Computer-assisted the optimisation of technological process

    Directory of Open Access Journals (Sweden)

    L.A. Dobrzański

    2009-04-01

    Full Text Available Purpose: An application was developed that makes it possible to analyze the efficiency of a technological process in terms of nonmaterial values, and neural networks were used to verify partial indicators of the quality of a process operation. Determining these indicators makes it possible to evaluate process efficiency, which can form the basis for optimizing a particular operation. Design/methodology/approach: The created model made it possible to analyze the chosen technological processes against efficiency criteria that describe the relationships: operation - material, operation - machine, operation - man, operation - technological parameters. Findings: In order to automate the process, to determine the efficiency of a technological operation (KiX) and possibly to optimize it, one of the artificial intelligence tools - neural networks - was applied. Practical implications: The application of neural networks allows the value of the technological efficiency of an operation (KiX) to be determined without the necessity of a detailed analysis of the whole process or of a particular operation. It also makes it possible to optimize operation efficiency by checking the value of operation efficiency when the values of particular partial efficiency indicators change. Originality/value: The method implemented in the computer application makes it possible to determine the studied indicators and finally assess process efficiency in order to plan the optimization of a particular operation.

  5. MycoperonDB: a database of computationally identified operons and transcriptional units in Mycobacteria

    Directory of Open Access Journals (Sweden)

    Ranjan Akash

    2006-12-01

    Full Text Available Abstract Background A key post-genomics challenge is to identify how genes in an organism come together and perform physiological functions. An important first step in this direction is to identify transcriptional units, operons and regulons in a genome. Here we implement and report a strategy to computationally identify transcriptional units and operons of mycobacteria and construct a database, MycoperonDB. Description We have predicted transcriptional units and operons in mycobacteria and organized these predictions in the form of a relational database called MycoperonDB. The MycoperonDB database at present consists of 18053 genes organized as 8256 predicted operons and transcriptional units from five closely related species of mycobacteria. The database further provides literature links for experimentally characterized operons. All known promoters and related information are collected, analysed and stored. It provides a user-friendly interface to allow web-based navigation of transcription units and operons. The web interface provides search tools to locate transcription factor binding DNA motifs upstream of various genes. The reliability of the operon prediction has been assessed by comparing the predicted operons with a set of known operons. Conclusion MycoperonDB is a publicly available structured relational database which holds information about mycobacterial genes, transcriptional units and operons. We expect this database to assist molecular biologists and microbiologists in hypothesizing functional linkages between operonic genes of mycobacteria, and in their experimental characterization and validation. The database is freely available from our website http://www.cdfd.org.in/mycoperondb/index.html.

  6. The ATLAS Fast Tracker Processing Units - input and output data preparation

    CERN Document Server

    Bolz, Arthur Eugen; The ATLAS collaboration

    2016-01-01

    The ATLAS Fast Tracker is a hardware processor built to reconstruct tracks at a rate of up to 100 kHz and provide them to the high level trigger system. The Fast Tracker will allow the trigger to utilize tracking information from the entire detector at an earlier event selection stage than ever before, allowing for more efficient event rejection. The connections of the system to the detector read-outs and to the high level trigger computing farms are made through custom boards implementing the Advanced Telecommunications Computing Technologies standard. The input is processed by the Input Mezzanine and Data Formatter boards, designed to receive and sort the data coming from the Pixel and Semiconductor Tracker. The Fast Tracker to Level-2 Interface Card connects the system to the computing farm. The Input Mezzanines are 128 boards, performing clustering, placed on the 32 Data Formatter mother boards that sort the information into the 64 logical regions required by the downstream processing units. This necessitat...

  7. Application of Computer Simulation Modeling to Medication Administration Process Redesign

    Directory of Open Access Journals (Sweden)

    Nathan Huynh

    2012-01-01

    Full Text Available The medication administration process (MAP is one of the most high-risk processes in health care. MAP workflow redesign can precipitate both unanticipated and unintended consequences that can lead to new medication safety risks and workflow inefficiencies. Thus, it is necessary to have a tool to evaluate the impact of redesign approaches in advance of their clinical implementation. This paper discusses the development of an agent-based MAP computer simulation model that can be used to assess the impact of MAP workflow redesign on MAP performance. The agent-based approach is adopted in order to capture Registered Nurse medication administration performance. The process of designing, developing, validating, and testing such a model is explained. Work is underway to collect MAP data in a hospital setting to provide more complex MAP observations to extend development of the model to better represent the complexity of MAP.
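    As a flavour of what an agent-based MAP model looks like, here is a deliberately small, self-contained sketch in which nurse agents work through medication tasks while random interruptions lengthen administration time. Every number and behaviour here is invented; the model described above is far richer and is calibrated against observed MAP data.

```python
import random

random.seed(7)

class Nurse:
    """Minimal nurse agent: administers medications and may be interrupted."""

    def __init__(self, name, patients, interruption_prob=0.3):
        self.name = name
        self.tasks = list(range(patients))       # one pending medication per patient
        self.interruption_prob = interruption_prob
        self.minutes = 0.0
        self.interruptions = 0

    def step(self):
        """Administer one medication; return False when the round is finished."""
        if not self.tasks:
            return False
        self.tasks.pop()
        self.minutes += random.uniform(3, 6)          # base administration time (min)
        if random.random() < self.interruption_prob:  # phone call, alarm, new order...
            self.interruptions += 1
            self.minutes += random.uniform(2, 10)     # time lost to the interruption
        return True

nurses = [Nurse(f"RN{i}", patients=8) for i in range(4)]
active = True
while active:                     # advance every agent once per tick
    active = False
    for nurse in nurses:
        if nurse.step():
            active = True

for nurse in nurses:
    print(f"{nurse.name}: {nurse.minutes:5.1f} min, {nurse.interruptions} interruptions")
```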

  8. Global tree network for computing structures enabling global processing operations

    Science.gov (United States)

    Blumrich; Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
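
    As a rough software analogue of the collective operations described above (not the patented hardware itself), the Python sketch below combines leaf values pairwise up a binary tree to a root and then broadcasts the result back to the leaves; the function names are assumptions.

    from operator import add

    # Software analogue of a tree-based collective: reduce leaf values upstream
    # to the root, then broadcast the root's result downstream to every leaf.
    def tree_reduce(values, op=add):
        level = list(values)
        while len(level) > 1:
            pairs = zip(level[0::2], level[1::2])      # combine sibling nodes
            leftover = level[len(level) // 2 * 2:]     # odd node passes through
            level = [op(a, b) for a, b in pairs] + leftover
        return level[0]

    def broadcast(result, n_leaves):
        return [result] * n_leaves                     # root pushes result down

    total = tree_reduce([3, 1, 4, 1, 5])
    print(total, broadcast(total, 5))                  # 14 [14, 14, 14, 14, 14]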

  9. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems Division (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge in the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA), and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental cleanup and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  10. The multistage nature of labour migration from Eastern and Central Europe (experience of Ukraine, Poland, United Kingdom and Germany during the 2002-2011 period)

    OpenAIRE

    Khrystyna FOGEL

    2015-01-01

    This article examines the consequences of the biggest round of EU Enlargement in 2004 on the labour migration flows from the new accession countries (A8) of the Eastern and Central Europe to Western Europe. The main focus of our research is the unique multistage nature of labour migration in the region. As a case study, we take labour migration from Poland to the United Kingdom and Germany and similar processes taking place in the labour migration from Ukraine to Poland. In particular, a new ...

  11. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results on system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.
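
    To make the pipeline idea concrete, the short Python sketch below chains a few toy processing steps, each consuming the previous step's output, in the spirit of the input/output ports described above; the step names and the one-dimensional stand-in for an image are hypothetical, not part of the described system.

    # Toy pipeline sketch (hypothetical steps, not the authors' code): each
    # step takes the previous step's output, mirroring chained processes with
    # input and output data ports.
    def extract_attributes(image):
        return {"image": image, "dims": len(image)}

    def normalize_intensity(data):
        lo, hi = min(data["image"]), max(data["image"])
        data["image"] = [(v - lo) / (hi - lo or 1) for v in data["image"]]
        return data

    def quality_report(data):
        return {"dims": data["dims"], "max_intensity": max(data["image"])}

    pipeline = [extract_attributes, normalize_intensity, quality_report]

    result = [42.0, 17.0, 99.0]        # 1-D stand-in for an MRI volume
    for step in pipeline:              # run the steps in sequence
        result = step(result)
    print(result)                      # {'dims': 3, 'max_intensity': 1.0}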

  12. Processing techniques for data from the Kuosheng Unit 1 shakedown safety-relief-valve tests

    International Nuclear Information System (INIS)

    This report describes techniques developed at the Lawrence Livermore National Laboratory, Livermore, CA, for processing original data from the Taiwan Power Company's Kuosheng MKIII Unit 1 Safety Relief Valve Shakedown Tests conducted in April/May 1981. The computer codes used, TPSORT, TPPLOT, and TPPSD, form a special evaluation system for treating the data from its original packed binary form to ordered, calibrated ASCII transducer files and then to production of time-history plots, numerical output files, and spectral analyses. Using the data processing techniques described, a convenient means of independently examining and analyzing a unique database for steam condensation phenomena in the MARKIII wetwell is described. The techniques developed for handling these data are applicable to the treatment of similar, but perhaps differently structured, experimental data sets

  13. Processing techniques for data from the Kuosheng Unit 1 shakedown safety-relief-valve tests

    Energy Technology Data Exchange (ETDEWEB)

    McCauley, E.W.; Rompel, S.L.; Weaver, H.J.; Altenbach, T.J.

    1982-08-01

    This report describes techniques developed at the Lawrence Livermore National Laboratory, Livermore, CA, for processing original data from the Taiwan Power Company's Kuosheng MKIII Unit 1 Safety Relief Valve Shakedown Tests conducted in April/May 1981. The computer codes used, TPSORT, TPPLOT, and TPPSD, form a special evaluation system for treating the data from its original packed binary form to ordered, calibrated ASCII transducer files and then to production of time-history plots, numerical output files, and spectral analyses. Using the data processing techniques described, a convenient means of independently examining and analyzing a unique database for steam condensation phenomena in the MARKIII wetwell is described. The techniques developed for handling these data are applicable to the treatment of similar, but perhaps differently structured, experimental data sets.
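
    As a minimal illustration of the first stage of such a workflow (not the LLNL codes themselves), the Python sketch below unpacks a packed binary record and emits calibrated ASCII samples; the 16-bit little-endian sample format and the gain/offset values are assumptions.

    import struct

    # Unpack a packed binary record and print ordered, calibrated ASCII values,
    # loosely in the spirit of the TPSORT stage described above; the format
    # string and calibration constants are assumptions, not the real ones.
    raw = struct.pack("<4h", 100, 250, -30, 75)   # stand-in for a packed record
    gain, offset = 0.01, 0.0                      # hypothetical calibration

    for i, sample in enumerate(struct.unpack("<4h", raw)):
        print(f"{i}\t{sample * gain + offset:.3f}")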

  14. [Correction of local and central disorders in patients with chronic abacterial prostatitis in usage of amus-01-intramag unit].

    Science.gov (United States)

    Glybochko, P V; Gol'braĭkh, G E; Raĭgorodskiĭ, Iu M; Valiev, A Z; Surikov, V N

    2007-01-01

    Dynamic magnetotherapy of local and central action, in combination with antibacterial endourethral therapy and rectal administration of vitaprost, arrests symptoms of chronic prostatitis. The combination of local and total dynamic magnetotherapy using the AMUS-01-INTRAMAG unit improves erection quality in 81.8% of males with a psychogenic form of erectile dysfunction. There was also improvement of spermogram parameters and relief of the asthenovegetative syndrome. PMID:17915452

  15. Decreased Mexican and Central American labor migration to the United States in the context of the crisis

    OpenAIRE

    José Luis Hernández Suárez

    2016-01-01

    This article analyzes the migration of Mexican and Central American workers to the United States from the standpoint of the theory of imperialism and underdevelopment, especially as regards the absolute surplus of workers that the underdeveloped capitalist economy is chronically unable to absorb, and the expected depletion of the system to make room for some of them through international migration, because the law of population of capital installed there makes international labor...

  16. Note on unit tangent vector computation for homotopy curve tracking on a hypercube

    Science.gov (United States)

    Chakraborty, A.; Allison, D. C. S.; Ribbens, C. J.; Watson, L. T.

    1991-01-01

    Probability-one homotopy methods are a class of methods for solving nonlinear systems of equations that are globally convergent from an arbitrary starting point. The essence of all such algorithms is the construction of an appropriate homotopy map and subsequent tracking of some smooth curve in the zero set of the homotopy map. Tracking a homotopy curve involves finding the unit tangent vector at different points along the zero curve, which amounts to calculating the kernel of the n x (n + 1) Jacobian matrix. While computing the tangent vector is just one part of the curve tracking algorithm, it can require a significant percentage of the total tracking time. This note presents computational results showing the performance of several different parallel orthogonal factorization/triangular system solving algorithms for the tangent vector computation on a hypercube.
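
    A serial sketch of the underlying linear-algebra step may help (the note itself is about parallel factorizations on a hypercube): the unit tangent spans the one-dimensional kernel of the n x (n + 1) Jacobian, which can be read off as the last column of Q in a complete QR factorization of the Jacobian's transpose.

    import numpy as np

    # Serial illustration only: the unit tangent to the homotopy zero curve is
    # the (normalized) null vector of the n x (n + 1) Jacobian, taken here from
    # the last column of Q in a complete QR factorization of its transpose.
    def unit_tangent(jac):
        n, m = jac.shape
        assert m == n + 1
        q, _ = np.linalg.qr(jac.T, mode="complete")  # (n+1) x (n+1) orthogonal Q
        t = q[:, -1]                                 # orthogonal to all n rows
        return t / np.linalg.norm(t)

    J = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 4.0]])                  # n = 2 example Jacobian
    t = unit_tangent(J)
    print(np.allclose(J @ t, 0.0), t)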

  17. 2012 Groundwater Monitoring Report Central Nevada Test Area, Subsurface Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-04-01

    The Central Nevada Test Area was the site of a 0.2- to 1-megaton underground nuclear test in 1968. The surface of the site has been closed, but the subsurface is still in the corrective action process. The corrective action alternative selected for the site was monitoring with institutional controls. Annual sampling and hydraulic head monitoring are conducted as part of the subsurface corrective action strategy. The site is currently in the fourth year of the 5-year proof-of-concept period that is intended to validate the compliance boundary. Analytical results from the 2012 monitoring are consistent with those of previous years. Tritium remains at levels below the laboratory minimum detectable concentration in all wells in the monitoring network. Samples collected from reentry well UC-1-P-2SR, which is not in the monitoring network but was sampled as part of supplemental activities conducted during the 2012 monitoring, indicate concentrations of tritium that are consistent with previous sampling results. This well was drilled into the chimney shortly after the detonation, and water levels continue to rise, demonstrating the very low permeability of the volcanic rocks. Water level data from new wells MV-4 and MV-5 and recompleted well HTH-1RC indicate that hydraulic heads are still recovering from installation and testing. Data from wells MV-4 and MV-5 also indicate that head levels have not yet recovered from the 2011 sampling event during which several thousand gallons of water were purged. It has been recommended that a low-flow sampling method be adopted for these wells to allow head levels to recover to steady-state conditions. Despite the lack of steady-state groundwater conditions, hydraulic head data collected from alluvial wells installed in 2009 continue to support the conceptual model that the southeast-bounding graben fault acts as a barrier to groundwater flow at the site.

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites, such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4-times increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  19. Point to point processing of digital images using parallel computing

    Directory of Open Access Journals (Sweden)

    Eric Olmedo

    2012-05-01

    Full Text Available This paper presents an approach to the point-to-point processing of digital images using parallel computing, particularly for grayscale conversion, brightening, darkening, thresholding and contrast change. The point-to-point technique applies a transformation to each pixel of the image concurrently rather than sequentially. The approach uses CUDA as the parallel programming tool on a GPU in order to take advantage of all available cores. Preliminary results show that CUDA obtains better results for most of the filters used; only for the negative filter on lower-resolution images did OpenCV perform better, while for high-resolution images CUDA performance is superior.
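
    The filters in question are point-wise, which is what makes them embarrassingly parallel on a GPU: each output pixel depends only on the corresponding input pixel. The NumPy sketch below reproduces three of them on the CPU purely for illustration; it is not the paper's CUDA code.

    import numpy as np

    # CPU illustration of point-to-point filters (each output pixel depends
    # only on the matching input pixel, hence the easy GPU parallelization).
    img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)

    brightened = np.clip(img.astype(np.int16) + 40, 0, 255).astype(np.uint8)
    thresholded = np.where(img > 127, 255, 0).astype(np.uint8)
    negative = (255 - img).astype(np.uint8)

    print(brightened, thresholded, negative, sep="\n")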

  20. Plot-processing system in computer U-200, (1)

    International Nuclear Information System (INIS)

    In order to use the terminal computer U-200 for plot-processing of experimental data, hardware and basic software have been prepared. The digital plotter, originally for the system USC-3, can be used as an output device through an adapter consisting of a decoding circuit and a plot-driving pulse generator. Six subroutine subprograms were also prepared as basic software for the U-200, which are called in FORTRAN language from the main routine of application programs: SUBROUTINE PLTSET, ... FACTOR, ... WHERE, ... PLOT, ... SYMBOL, and ... NUMBER. They are similar in functional structure to those for the USC-3. (auth.)

  1. Computer model of one-dimensional equilibrium controlled sorption processes

    Science.gov (United States)

    Grove, D.B.; Stollenwerk, K.G.

    1984-01-01

    A numerical solution to the one-dimensional solute-transport equation with equilibrium-controlled sorption and a first-order irreversible-rate reaction is presented. The computer code is written in the FORTRAN language, with a variety of options for input and output for user ease. Sorption reactions include Langmuir, Freundlich, and ion-exchange, with or without equal valence. General equations describing transport and reaction processes are solved by finite-difference methods, with nonlinearities accounted for by iteration. Complete documentation of the code, with examples, is included. (USGS)
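
    As an illustration of the kind of scheme involved (not the USGS code itself), the explicit finite-difference sketch below advances a 1-D advection-dispersion equation with linear equilibrium sorption folded into a constant retardation factor and a first-order decay term; all parameter values are arbitrary.

    import numpy as np

    # Explicit finite-difference sketch: 1-D advection-dispersion with linear
    # equilibrium sorption (constant retardation factor R) and first-order
    # decay lambda*C; parameters chosen only to keep the scheme stable.
    nx, dx, dt, nsteps = 100, 1.0, 0.1, 500
    v, D, R, lam = 1.0, 0.5, 2.0, 0.01   # velocity, dispersion, retardation, decay

    c = np.zeros(nx)
    c[0] = 1.0                                            # constant inlet
    for _ in range(nsteps):
        adv = -v * (c[1:-1] - c[:-2]) / dx                # upwind advection
        disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * (adv + disp - lam * c[1:-1]) / R
        c[0], c[-1] = 1.0, c[-2]                          # boundary conditions
    print(c[:10].round(3))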

  2. Evapotranspiration Units for the Diamond Valley Flow System Groundwater Discharge Area, Central Nevada, 2010

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These data were created as part of a hydrologic study to characterize groundwater budgets and water quality in the Diamond Valley Flow System (DVFS), central...

  3. Aeromagnetic study of the midcontinent gravity high of central United States

    Science.gov (United States)

    King, Elizabeth R.; Zietz, Isidore

    1971-01-01

    present Earth's field and differs from it radically in direction. This magnetization was acquired before the flows were tilted into their present positions. A computed magnetic profile shows that a trough of flows with such a magnetization and inward-dipping limbs can account for the observed persistent lows along the western edge of the block, the relatively low magnetic values along the axis of the block, and the large positive anomaly along the eastern side of the block. Flows as much as 1 mi thick near the base of the sequence have a remanent magnetization with a nearly opposite polarity. This reverse polarity has been measured on both sides of Lake Superior and is probably also present farther south, particularly in Iowa where the outer units of the block in an area north of Des Moines give rise to a prominent magnetic low. The axis of this long belt of Keweenawan mafic rocks cuts discordantly through the prevailing east-west-trending fabric of the older Precambrian terrane from southern Kansas to Lake Superior. This belt has several major left-lateral offsets, one of which produces a complete hiatus in the vicinity of the 40th parallel where an east-west transcontinental rift or fracture zone has been proposed. The axial basins of clastic rocks are outlined by linear magnetic anomalies and show a concordant relation to the structure of the mafic flows. These basins are oriented at an angle to the main axis, suggesting that the entire feature originated as a major rift composed of a series of short, linear, en echelon segments with offsets similar to the transform faults characterizing the present mid-ocean rift system. This midcontinent rift may well have been part of a Keweenawan global rift system with initial offsets consisting of transform faults along pre-existing fractures, but apparently it never fully developed laterally into an ocean basin, and the upwelling mafic material was localized along a relatively narrow belt.

  5. 77 FR 58576 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...

    Science.gov (United States)

    2012-09-21

    ... COMMISSION Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers, and... importation of certain wireless communication devices, portable music and data processing devices, computers... after importation of certain wireless communication devices, portable music and data processing...

  6. CENTRAL ASIA IN THE FOREIGN POLICY OF RUSSIA, THE UNITED STATES, AND THE EUROPEAN UNION

    OpenAIRE

    Omarov, Mels; Omarov, Noor

    2009-01-01

    The Soviet Union left behind a geopolitical vacuum in Central Asia which augmented the interest of outside powers in the region. Indeed, its advantageous geopolitical location, natural riches (oil and gas in particular), as well as transportation potential and the possibility of using it as a bridgehead in the counter-terrorist struggle have transformed Central Asia into one of the most attractive geopolitical areas. The great powers' highly divergent interests have led to their sharp rivalry...

  7. Marginal accuracy of four-unit zirconia fixed dental prostheses fabricated using different computer-aided design/computer-aided manufacturing systems.

    Science.gov (United States)

    Kohorst, Philipp; Brinkmann, Henrike; Li, Jiang; Borchers, Lothar; Stiesch, Meike

    2009-06-01

    Besides load-bearing capacity, marginal accuracy is a further crucial factor influencing the clinical long-term reliability of fixed dental prostheses (FDPs). The aim of this in vitro study was to evaluate the marginal fit of four-unit zirconia bridge frameworks fabricated using four different computer-aided design (CAD)/computer-aided manufacturing (CAM) systems. Ten frameworks were manufactured using each fabricating system. Three systems (inLab, Everest, Cercon) processed white-stage zirconia blanks, which had to be sintered to final density after milling, while with one system (Digident) restorations were directly milled from a fully sintered material. After manufacturing, horizontal and vertical marginal discrepancies, as well as the absolute marginal discrepancy, were determined by means of a replica technique. The absolute marginal discrepancy, which is considered to be the most suitable parameter reflecting restorations' misfit in the marginal area, had a mean value of 58 μm for the Digident system. By contrast, mean absolute marginal discrepancies for the three other systems, processing presintered blanks, differed significantly and ranged between 183 and 206 μm. Within the limitations of this study, it could be concluded that the marginal fit of zirconia FDPs is significantly dependent on the CAD/CAM system used, with restorations processed from fully sintered zirconia showing better fitting accuracy. PMID:19583762

  8. A snow hydroclimatology of the central and southern Appalachian Mountains, United States of America

    Science.gov (United States)

    Graybeal, Daniel Y.

    Background. A significant vulnerability to snowmelt-related flooding in the Appalachians was demonstrated by massive events in March, 1936; January, 1996; and January, 1998. Yet, no quantitative estimate of this vulnerability has been published for these mountains. High elevations extending far southward confound the extrapolation of snow hydroclimatology from adjacent regions. Objectives. The principal objective was to develop a complete snow hydroclimatology of the central and southern Appalachians, considering the deposition, detention, and depletion phases of snow cover. A snowfall climatology addressed whether and how often sufficient snow falls to create a flood hazard, while a snow cover climatology addressed whether and how often snow is allowed to build to flood-risk proportions. A snowmelt hydroclimatology addressed whether and how often snowmelt contributes directly to large peakflows in a representative watershed. Approach. Monthly and daily temperature, precipitation, and snow data were obtained from approximately 1000 cooperative-network stations with >=10 seasons (Oct-May) of snow data. Mean, maximum, percentiles, and interseasonal and monthly variability were mapped. Time series were analyzed, and proportions of seasonal snowfall from significant events determined, at select stations. A spatially distributed, index snow cover model facilitated classification of Cheat River, WV, peakflows by generating process. Confidence intervals about fitted peakflow frequency curves were used to evaluate differences among processes. Results. Climates in which snow significantly affects floods have been discriminated in the literature by 150 cm mean seasonal snowfall, 30 days mean snow cover duration, or 50 cm mean seasonal maximum snow depth. In the Appalachian Mountains south to North Carolina, these criteria lie within 95% confidence intervals about the median or mean values of these parameters. At return periods of 10 and 20 years, these thresholds are usually

  9. The Use of GPUs for Solving the Computed Tomography Problem

    Directory of Open Access Journals (Sweden)

    A.E. Kovtanyuk

    2014-07-01

    Full Text Available Computed tomography (CT) is a widespread method used to study the internal structure of objects. The method has applications in medicine, industry and other fields of human activity. In particular, electron imaging, as a species of CT, can be used to restore the structure of nanosized objects. Accurate and rapid results are in high demand in modern science. However, there are computational limitations that bound the possible usefulness of CT. On the other hand, the introduction of high-performance calculations using Graphics Processing Units (GPUs) improves the quality and performance of computed tomography investigations. Moreover, parallel computing with GPUs gives significantly higher computation speeds when compared with Central Processing Units (CPUs), because of the architectural advantages of the former. In this paper a computed tomography method of recovering the image using parallel computations powered by NVIDIA CUDA technology is considered. The implementation of this approach significantly reduces the time required for solving the CT problem.

  10. Central limit theorems for smoothed extreme value estimates of Poisson point processes boundaries

    OpenAIRE

    Girard, Stéphane; Menneteau, Ludovic

    2011-01-01

    In this paper, we give sufficient conditions to establish central limit theorems for boundary estimates of Poisson point processes. The considered estimates are obtained by smoothing some bias-corrected extreme values of the point process. We show how the smoothing leads to Gaussian asymptotic distributions and therefore to pointwise confidence intervals. Some new unidimensional and multidimensional examples are provided.

  11. Central limit theorems for smoothed extreme value estimates of point processes boundaries

    OpenAIRE

    Girard, Stéphane; Menneteau, Ludovic

    2005-01-01

    In this paper, we give sufficient conditions to establish central limit theorems for boundary estimates of Poisson point processes. The considered estimates are obtained by smoothing some bias-corrected extreme values of the point process. We show how the smoothing leads to Gaussian asymptotic distributions and therefore to pointwise confidence intervals. Some new unidimensional and multidimensional examples are provided.

  12. On a Multiprocessor Computer Farm for Online Physics Data Processing

    CERN Document Server

    Sinanis, N J

    1999-01-01

    The topic of this thesis is the design-phase performance evaluation of a large multiprocessor (MP) computer farm intended for the on-line data processing of the Compact Muon Solenoid (CMS) experiment. CMS is a high-energy physics experiment, planned to operate at CERN (Geneva, Switzerland) during the year 2005. The CMS computer farm consists of 1,000 MP computer systems and a 1,000 x 1,000 communications switch. The approach followed for the farm performance evaluation is through simulation studies and the evaluation of small prototype systems, the building blocks of the farm. For the purposes of the simulation studies, we have developed a discrete-event, event-driven simulator that is capable of describing the high-level architecture of the farm and giving estimates of the farm's performance. The simulator is designed in a modular way to facilitate the development of various modules that model the behavior of the farm building blocks at the desired level of detail. With the aid of this simulator, we make a particular...
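
    A toy example of the discrete-event, event-driven style mentioned above (not the thesis simulator): events are popped from a priority queue in time order, each arrival schedules a completion after a random service time, and the run yields a rough throughput figure for a farm of identical nodes. All sizes and rates below are placeholders.

    import heapq
    import random

    # Minimal discrete-event, event-driven sketch of a processing farm.
    random.seed(0)
    N_NODES, N_JOBS = 4, 20
    events, busy_until, finished = [], [0.0] * N_NODES, 0

    for i in range(N_JOBS):                        # regular arrival stream
        heapq.heappush(events, (i * 0.5, i % N_NODES, "arrival"))

    clock = 0.0
    while events:
        clock, node, kind = heapq.heappop(events)  # next event in time order
        if kind == "arrival":
            start = max(clock, busy_until[node])   # wait if the node is busy
            busy_until[node] = start + random.expovariate(1.0)
            heapq.heappush(events, (busy_until[node], node, "done"))
        else:
            finished += 1
    print(finished / clock, "jobs completed per time unit")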

  13. Comparison of measured and computed plasma loading resistance in the tandem mirror experiment-upgrade (TMX-U) central cell

    International Nuclear Information System (INIS)

    The plasma loading resistance vs density plots computed with McVey's code XANTENA1 agree well with experimental measurements in the TMX-U central cell. The agreement is much better for frequencies where ω/ω_ci < 1 than for ω/ω_ci ≥ 1

  14. Radioimmunoassay data processing program for IBM PC computers

    International Nuclear Information System (INIS)

    The Medical Applications Section of the International Atomic Energy Agency (IAEA) has previously developed several programs for use on the Hewlett-Packard HP-41C programmable calculator to facilitate better quality control in radioimmunoassay through improved data processing. The program described in this document is designed for off-line analysis on an IBM PC (or compatible) of counting data from standards and unknown specimens (i.e. for analysis of counting data previously recorded by a counter), together with internal quality control (IQC) data both within and between batches. The greater computing power of the IBM PC has enabled the inclusion of the imprecision profile and IQC control curves, which were unavailable in the HP-41C version. It is intended that the program will make good data processing capability available to laboratories having limited financial resources and serious problems of quality control. 3 refs

  15. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  16. Process of Market Strategy Optimization Using Distributed Computing Systems

    Directory of Open Access Journals (Sweden)

    Nowicki Wojciech

    2015-12-01

    Full Text Available If market repeatability is assumed, it is possible, with some real probability, to deduce short-term market changes by making some calculations. An algorithm based on a logical and statistically reasonable scheme for making decisions about opening or closing a position on a market is called an automated strategy. Due to market volatility, all parameters change from time to time, so there is a need to optimize them constantly. This article describes a team organization process for researching market strategies. Individual team members are merged into small groups according to their responsibilities. The team members perform data processing tasks through a cascade organization, providing solutions to speed up work related to the use of remote computing resources. They also work out how to store results in a suitable way, according to the type of task, and facilitate the publication of a large number of results.

  17. Security Evaluation of the Electronic Control Unit Software Update Process

    OpenAIRE

    Jaks, Liis

    2014-01-01

    A modern vehicle is controlled by a distributed network of embedded devices - Electronic Control Units. The software of these devices is updated over an easily accessible and standardised diagnostic interface. Their hardware capabilities are very low, and thereby the security implementations are fairly minimalistic. This thesis analyses the Electronic Control Units used in the heavy-duty vehicle company Scania for security vulnerabilities. First, a list of security requirements was compiled. ...

  18. Business process reengineering in the centralization of the industrial enterprises management

    Directory of Open Access Journals (Sweden)

    N.I. Chukhray

    2015-09-01

    Full Text Available The aim of the article. One of the important strategic directions of a powerful independent state with a stable economy is the development of the national economy, which requires the use of new, upgraded tools that will enable the redesign and improvement of the production and management activities of the enterprise and make it more productive and competitive, while providing savings of financial, labor and other resources. However, this presupposes partial restructuring and, in some cases, complete restructuring of business processes. The aim of the article is to study the features of business process reengineering at domestic enterprises and to develop a business process centralization algorithm on the example of JSC «Concern-Electron». To achieve this goal, the research identifies the following objectives: to summarize the main approaches to business process reengineering on the basis of centralization; to characterize the main stages of reengineering implementation at an industrial enterprise; to propose a selection mechanism of subsidiary business processes for reengineering; to determine the algorithm of business process management centralization. The results of the analysis. The paper summarizes the main approaches to business process reengineering based on centralization; characterizes the advantages of its use at industrial enterprises; and proposes the stages of reengineering for Ukrainian industrial enterprises. Business process reengineering improves the efficiency of work organization at JSC «Concern-Electron»: a generalized approach to the centralization of industrial enterprise management, and an algorithm of business process management centralization that includes identification of the business processes that are duplicated, business process selection for reengineering using fuzzy set theory, and making managerial decisions on reengineering. Conclusions and

  19. Reservoir computing: a photonic neural network for information processing

    Science.gov (United States)

    Paquot, Yvan; Dambre, Joni; Schrauwen, Benjamin; Haelterman, Marc; Massar, Serge

    2010-06-01

    At the boundaries between photonics and dynamic systems theory, we combine recent advances in neural networks with opto-electronic nonlinearities to demonstrate a new way to perform optical information processing. The concept of reservoir computing arose recently as a powerful solution to the issue of training recurrent neural networks. Indeed, it is comparable to, or even outperforms, other state of the art solutions for tasks such as speech recognition or time series prediction. As it is based on a static topology, it allows making the most of very simple physical architectures having complex nonlinear dynamics. The method is inherently robust to noise and does not require explicit programming operations. It is therefore particularly well adapted for analog realizations. Among the various implementations of the concept that have been proposed, we focus on the field of optics. Our experimental reservoir computer is based on opto-electronic technology, and can be viewed as an intermediate step towards an all optical device. Our fiber optics system is based on a nonlinear feedback loop operating at the threshold of chaos. In its present preliminary stage it is already capable of complicated tasks like modeling nonlinear systems with memory. Our aim is to demonstrate that such an analog reservoir can have performances comparable to state of the art digital implementations of Neural Networks. Furthermore, our system can in principle be operated at very high frequencies thanks to the high speed of photonic devices. Thus one could envisage targeting applications such as online information processing in broadband telecommunications.
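
    A purely digital stand-in for such a reservoir is the echo state network: a fixed random recurrent layer is driven by the input and only a linear readout is trained. The Python sketch below does this with ridge regression on a toy next-sample prediction task; the sizes and scalings are arbitrary choices, not values from the experiment described above.

    import numpy as np

    # Echo state network sketch: fixed random reservoir, trained linear readout.
    rng = np.random.default_rng(0)
    n_res, n_steps = 100, 1000
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # spectral radius below 1

    u = np.sin(0.2 * np.arange(n_steps + 1))        # toy input signal
    x, states = np.zeros(n_res), []
    for t in range(n_steps):
        x = np.tanh(W @ x + W_in * u[t])            # reservoir update
        states.append(x.copy())

    X, y = np.array(states), u[1:]                  # target: next input sample
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    print(np.mean((X @ W_out - y) ** 2))            # training mean-squared error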

  20. [DNA computing].

    Science.gov (United States)

    Błasiak, Janusz; Krasiński, Tadeusz; Popławski, Tomasz; Sakowski, Sebastian

    2011-01-01

    Biocomputers can be an alternative to traditional "silicon-based" computers, whose continued development may be limited by the constraints of further miniaturization (imposed by the Heisenberg uncertainty principle) and of increasing the amount of information transferred between the central processing unit and the main memory (the von Neumann bottleneck). The idea of DNA computing came true for the first time in 1994, when Adleman solved the Hamiltonian path problem using short DNA oligomers and DNA ligase. In the early 2000s a series of biocomputer models was presented, with a seminal work by Shapiro and his colleagues who presented a molecular two-state finite automaton, in which the restriction enzyme FokI constituted the hardware and short DNA oligomers were the software as well as the input/output signals. DNA molecules also provided energy for this machine. DNA computing can be exploited in many applications, from studies of gene expression patterns to the diagnosis and therapy of cancer. Research on DNA computing is still in progress both in vitro and in vivo, and the promising results of this research give hope for a breakthrough in computer science. PMID:21735816
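
    In software terms, the Shapiro-style machine mentioned above is a two-state finite automaton. The Python sketch below runs such an automaton over a DNA-like input string; the transition table is illustrative only, not the published molecular design.

    # Two-state finite automaton over a two-symbol alphabet; each transition is
    # the software counterpart of one cut-and-ligate step in the molecular
    # machine. This particular automaton tracks the parity of 'b' symbols.
    TRANSITIONS = {
        ("S0", "a"): "S0", ("S0", "b"): "S1",
        ("S1", "a"): "S1", ("S1", "b"): "S0",
    }

    def run_automaton(symbols, state="S0"):
        for s in symbols:
            state = TRANSITIONS[(state, s)]
        return state

    print(run_automaton("abba"))   # S0: an even number of 'b' symbols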

  1. Manyscale Computing for Sensor Processing in Support of Space Situational Awareness

    Science.gov (United States)

    Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.

    2014-09-01

    Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include

  2. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Meier, David E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edwards, Matthew K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Orton, Robert D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rapko, Brian M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smart, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  3. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    International Nuclear Information System (INIS)

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  4. Central Weighted Non-Oscillatory (CWENO) and Operator Splitting Schemes in Computational Astrophysics

    Science.gov (United States)

    Ivanovski, Stavro

    2011-05-01

    High-resolution shock-capturing schemes (HRSC) are known to be the most adequate and advanced technique used for numerical approximation of the solution of hyperbolic systems of conservation laws. Since most astrophysical phenomena can be described by means of systems of (M)HD conservation equations, finding accurate, computationally inexpensive and robust numerical approaches for their solution is a task of great importance for numerical astrophysics. Based on the Central Weighted Non-Oscillatory (CWENO) reconstruction approach, which relies on the adaptive choice of the smoothest stencil for resolving strong shocks and discontinuities in a central framework on a staggered grid, we present a new algorithm for systems of conservation laws using the key idea of evolving the intermediate stages in the Runge-Kutta time discretization in primitive variables. In this thesis, we introduce a new so-called conservative-primitive variables strategy (CPVS) by integrating the latter into the earlier proposed Central Runge-Kutta schemes (Pareschi et al., 2005). The advantages of the new shock-capturing algorithm with respect to the state-of-the-art HRSC schemes used in astrophysics, such as upwind Godunov-type schemes, can be summarized as follows: (i) a Riemann-solver-free central approach; (ii) favorable dissipation (especially needed for multidimensional applications in astrophysics) owing to the diffusivity coming from the design of the scheme; (iii) high accuracy and speed of the method. The latter stems from the fact that advancing in time in the predictor step does not need inversion between the primitive and conservative variables, which is essential in applications where the conservative variables are neither trivial to compute nor to invert into the set of primitive ones, as is the case in relativistic hydrodynamics. The main objective of the research adopted in the thesis is to outline the promising application of the CWENO (with CPVS) in the problems of the

  5. Computational modelling of a thermoforming process for thermoplastic starch

    Science.gov (United States)

    Szegda, D.; Song, J.; Warby, M. K.; Whiteman, J. R.

    2007-05-01

    Plastic packaging waste currently forms a significant part of municipal solid waste and as such is causing increasing environmental concerns. Such packaging is largely non-biodegradable and is particularly difficult to recycle or to reuse due to its complex composition. Apart from limited recycling of some easily identifiable packaging wastes, such as bottles, most packaging waste ends up in landfill sites. In recent years, in an attempt to address this problem in the case of plastic packaging, the development of packaging materials from renewable plant resources has received increasing attention and a wide range of bioplastic materials based on starch are now available. Environmentally these bioplastic materials also reduce reliance on oil resources and have the advantage that they are biodegradable and can be composted upon disposal to reduce the environmental impact. Many food packaging containers are produced by thermoforming processes in which thin sheets are inflated under pressure into moulds to produce the required thin wall structures. Hitherto these thin sheets have almost exclusively been made of oil-based polymers and it is for these that computational models of thermoforming processes have been developed. Recently, in the context of bioplastics, commercial thermoplastic starch sheet materials have been developed. The behaviour of such materials is influenced both by temperature and, because of the inherent hydrophilic characteristics of the materials, by moisture content. Both of these aspects affect the behaviour of bioplastic sheets during the thermoforming process. This paper describes experimental work and work on the computational modelling of thermoforming processes for thermoplastic starch sheets in an attempt to address the combined effects of temperature and moisture content. After a discussion of the background of packaging and biomaterials, a mathematical model for the deformation of a membrane into a mould is presented, together with its

  6. New fortran computer programs to acquire and process isotopic mass spectrometric data

    International Nuclear Information System (INIS)

    This report describes in some detail the operation of newly written programs that acquire and process isotopic mass spectrometric data. Both functional and overall design aspects are addressed. The three basic program units - file manipulation, data acquisition, and data processing - are discussed in turn. Step-by-step instructions are included where appropriate, and each subsection is described in enough detail to give a clear picture of its function. Organization of the file structure, which is central to the entire concept, is extensively discussed with the help of numerous tables. Appendices contain flow charts and definitions of variables to help a programmer unfamiliar with the programs to alter them with a minimum of lost time. 12 figures, 17 tables

  7. Design of a Distributed Control System Using a Personal Computer and Micro Control Units for Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Mitsuhiro Yamano

    2010-01-01

    Full Text Available Problem statement: Humanoid robots have many motors and sensors, and many control methods are used to carry out the robots' complicated tasks. Therefore, efficient control systems are required for the robots. Approach: This study presented a distributed control system using a Personal Computer (PC) and Micro Control Units (MCUs) for humanoid robots. Distributed control systems have the advantages that parallel processing using multiple computers is possible and cables in the system can be short. For the control of the humanoid robots, the required functions of the control system were discussed. Based on the discussion, the hardware of the system, including a PC and MCUs, was proposed. The system was designed to carry out the process of robot control efficiently. The system can be expanded easily by increasing the number of MCU boards. The software of the system for feedback control of the motors and the communication between the computers was proposed. Flexible switching of motor control methods can be achieved easily. Results: Experiments were performed to show the effectiveness of the system. The sampling frequency of the whole system can be about 0.5 kHz and that in local MCUs can be about 10 kHz. The control method of the motors could be changed during motion in an experiment controlling four joints of the robot. Conclusion: The results of the experiments showed that the distributed control system proposed in this study is effective for humanoid robots.
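
    As a minimal illustration of the kind of local feedback loop each MCU might run (the gains, rates and joint response model below are assumptions, not values from the study), the Python sketch implements a per-joint PID position controller updated at the 10 kHz local rate, with the setpoint standing in for a command sent by the PC.

    # Per-joint PID position loop; gains, time step and the crude joint model
    # are placeholders chosen only to make the toy simulation converge.
    class JointPID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def update(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    pid = JointPID(kp=2.0, ki=0.1, kd=0.05, dt=1e-4)   # 10 kHz local loop
    angle = 0.0
    for _ in range(1000):
        torque = pid.update(setpoint=1.0, measured=angle)
        angle += 1e-4 * torque          # crude joint response, not real dynamics
    print(round(angle, 3))              # angle creeps toward the 1.0 setpoint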

  8. Sector spanning agrifood process transparency with Direct Computer Mapping

    Directory of Open Access Journals (Sweden)

    Mónika Varga

    2010-11-01

    Full Text Available Agrifood processes are built from multiscale, time-varied networks that span many sectors, from cultivation through animal breeding, the food industry and trade to the consumers. Sector-spanning traceability has not yet been solved, because neither the "one-step backward, one-step forward" passing of IDs nor large, sophisticated databases give a feasible solution. In our approach, the transparency of process networks is based on the generic description of dynamic mass balances. The solution of this apparently more difficult task makes possible the unified acquisition of data from the different ERP systems and the scalable storage of these simplified process models. In addition, various task-specific intensive parameters (e.g. concentrations, prices, etc.) can also be carried with the mass flows. With the knowledge of these structured models, the planned Agrifood Interoperability Centers can serve tracing and tracking results for the actors and for the public authorities. Our methodology is based on the Direct Computer Mapping of process models. The software is implemented in open-source code in the GNU Prolog and C++ languages. In the first, preliminary phase we have studied a couple of consciously different realistic actors, as well as an example of a sector-spanning chain combined from these realistic elements.

  9. Emergency healthcare process automation using mobile computing and cloud services.

    Science.gov (United States)

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2012-10-01

    Emergency care is basically concerned with the provision of pre-hospital and in-hospital medical and/or paramedical services and it typically involves a wide variety of interdependent and distributed activities that can be interconnected to form emergency care processes within and between Emergency Medical Service (EMS) agencies and hospitals. Hence, in developing an information system for emergency care processes, it is essential to support individual process activities and to satisfy collaboration and coordination needs by providing readily access to patient and operational information regardless of location and time. Filling this information gap by enabling the provision of the right information, to the right people, at the right time fosters new challenges, including the specification of a common information format, the interoperability among heterogeneous institutional information systems or the development of new, ubiquitous trans-institutional systems. This paper is concerned with the development of an integrated computer support to emergency care processes by evolving and cross-linking institutional healthcare systems. To this end, an integrated EMS cloud-based architecture has been developed that allows authorized users to access emergency case information in standardized document form, as proposed by the Integrating the Healthcare Enterprise (IHE) profile, uses the Organization for the Advancement of Structured Information Standards (OASIS) standard Emergency Data Exchange Language (EDXL) Hospital Availability Exchange (HAVE) for exchanging operational data with hospitals and incorporates an intelligent module that supports triaging and selecting the most appropriate ambulances and hospitals for each case. PMID:22205383

  10. Cranial computed tomography findings in patients admitted to the emergency unit of Hospital Universitario Cajuru

    International Nuclear Information System (INIS)

    Objective: to identify and analyze the prevalence of cranial computed tomography findings in patients admitted to the emergency unit of Hospital Universitario Cajuru. Materials and methods: cross-sectional study analyzing 200 consecutive non contrast-enhanced cranial computed tomography reports of patients admitted to the emergency unit of Hospital Universitario Cajuru. Results: alterations were observed in 76.5% of the patients. Among them, the following findings were most frequently observed: extracranial soft tissue swelling (22%), bone fracture (16.5%), subarachnoid hemorrhage (15%), nonspecific hypodensity (14.5%), paranasal sinuses opacification (11.5%), diffuse cerebral edema (10.5%), subdural hematoma (9.5%), cerebral contusion (8.5%), hydrocephalus (8%), retractable hypodensity /gliosis/ encephalomalacia (8%). Conclusion: the authors recognize that the most common findings in emergency departments reported in the literature are similar to the ones described in the present study. This information is important for professionals to recognize the main changes to be identified at cranial computed tomography, and for future planning and hospital screening aiming at achieving efficiency and improvement in services. (author)

  11. Parallel design of JPEG-LS encoder on graphics processing units

    Science.gov (United States)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on a NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with 26.3x speedup over its original CPU code.
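
    One of the building blocks named above, the parallel prefix sum, can be illustrated independently of CUDA. The Python sketch below runs the Blelloch-style up-sweep/down-sweep exclusive scan sequentially; the two-phase structure is what exposes the parallelism on a GPU. It assumes the input length is a power of two.

    # Work-efficient exclusive scan (Blelloch pattern), shown serially; on a
    # GPU every inner-loop iteration of each phase can run in parallel.
    def exclusive_scan(a):
        a, n = list(a), len(a)          # n must be a power of two here
        d = 1
        while d < n:                    # up-sweep (reduce) phase
            for i in range(2 * d - 1, n, 2 * d):
                a[i] += a[i - d]
            d *= 2
        a[n - 1] = 0
        d //= 2
        while d >= 1:                   # down-sweep phase
            for i in range(2 * d - 1, n, 2 * d):
                a[i - d], a[i] = a[i], a[i] + a[i - d]
            d //= 2
        return a

    print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]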

  12. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate concurrent methods, such as TM-align and Fr-TM-align, into its parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.
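
    As a simplified illustration of the fine-grained parallelism such a framework exploits (and not the ppsAlign algorithm itself), the sketch below assigns one GPU thread per database structure and computes the root-mean-square deviation between a query and each target over pre-matched C-alpha positions. Real structure alignment must also search for the residue correspondence and the optimal superposition; all names and the packed coordinate layout are assumptions.

        #include <cuda_runtime.h>

        // One thread scores the query against one target structure.
        // Coordinates are packed as consecutive (x, y, z) triples per residue.
        __global__ void pairwiseRmsd(const float *query,    // 3 * nResidues floats
                                     const float *targets,  // nTargets * 3 * nResidues floats
                                     float *rmsd, int nResidues, int nTargets)
        {
            int t = blockIdx.x * blockDim.x + threadIdx.x;
            if (t >= nTargets) return;

            const float *tgt = targets + (size_t)t * 3 * nResidues;
            float sum = 0.0f;
            for (int r = 0; r < 3 * nResidues; r += 3) {
                float dx = query[r]     - tgt[r];
                float dy = query[r + 1] - tgt[r + 1];
                float dz = query[r + 2] - tgt[r + 2];
                sum += dx * dx + dy * dy + dz * dz;
            }
            rmsd[t] = sqrtf(sum / nResidues);
        }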

  13. Evaluation of Selected Resource Allocation and Scheduling Methods in Heterogeneous Many-Core Processors and Graphics Processing Units

    OpenAIRE

    Ciznicki, Milosz; Kurowski, Krzysztof; Węglarz, Jan

    2014-01-01

    Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance opportunities to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently ...
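
    The abstract is truncated here, but the underlying problem — dividing work between CPU cores and GPUs — can be illustrated with a deliberately naive sketch that uses a fixed split ratio. Real schedulers, including the methods evaluated in the paper, choose and adapt this split dynamically; the function names and the 0.75 ratio below are assumptions.

        #include <thread>
        #include <vector>
        #include <cuda_runtime.h>

        __global__ void scaleGpu(float *x, int n, float a)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) x[i] *= a;
        }

        // Scale the first `split` elements on the GPU while a CPU thread handles the rest.
        void scaleHeterogeneous(std::vector<float> &x, float a)
        {
            int n = static_cast<int>(x.size());
            int split = static_cast<int>(0.75 * n);   // arbitrary static split, not a tuned value

            float *d = nullptr;
            cudaMalloc(&d, split * sizeof(float));
            cudaMemcpy(d, x.data(), split * sizeof(float), cudaMemcpyHostToDevice);
            scaleGpu<<<(split + 255) / 256, 256>>>(d, split, a);

            std::thread cpuPart([&] {                 // CPU works on the tail concurrently
                for (int i = split; i < n; ++i) x[i] *= a;
            });

            cudaMemcpy(x.data(), d, split * sizeof(float), cudaMemcpyDeviceToHost);
            cpuPart.join();
            cudaFree(d);
        }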

  14. Improving the Quotation Process of an After-Sales Unit

    OpenAIRE

    Matilainen, Janne

    2013-01-01

    The purpose of this study was to model and analyze the quotation process of area managers at a global company. Process improvement requires understanding the fundamentals of the process. The study was conducted as a case study. The data comprised internal documentation of the case company, literature, and semi-structured, themed interviews of process performers and stakeholders. The objective was to produce a model of the current state of the process. The focus was to establish a holistic view o...

  15. Preventing central venous catheter-related infection in a surgical intensive-care unit

    NARCIS (Netherlands)

    Bijma, R; Girbes, AR; Kleijer, DJ; Zwaveling, JH

    1999-01-01

    The cumulative effect of five measures (introduction of hand disinfection with alcohol, a new type of dressing, a one-bag system for parenteral nutrition, a new intravenous connection device, and surveillance by an infection control practitioner) on central venous catheter colonization and bacteremia

  16. High Performance Direct Gravitational N-body Simulations on Graphics Processing Units

    CERN Document Server

    Portegies Zwart, Simon; Belleman, Robert; Geldof, Peter

    2007-01-01

    We present the results of gravitational direct $N$-body simulations using the commercial graphics processing units (GPUs) NVIDIA Quadro FX1400 and GeForce 8800GTX, and compare the results with GRAPE-6Af special purpose hardware. The force evaluation of the $N$-body problem was implemented in Cg, using the GPU directly to speed up the calculations. The integration of the equations of motion was implemented in C and run on the host computer, using the fourth-order predictor-corrector Hermite integrator with block time steps. We find that for a large number of particles ($N \gtrsim 10^4$) modern graphics processing units offer an attractive low-cost alternative to GRAPE special purpose hardware. A modern GPU continues to give a relatively flat scaling with the number of particles, comparable to that of the GRAPE. Using the same time step criterion, the total energy of the $N$-body system was conserved to better than one part in $10^6$ on the GPU, which is only about an order of magnitude worse than obtained with GRAPE. Fo...
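
    To make the force-evaluation step concrete, here is a minimal CUDA sketch of an all-pairs gravitational acceleration kernel with one thread per particle and Plummer softening. It is written in CUDA rather than the Cg shader language used in the paper, G is set to 1 in N-body units, and the softening parameter and names are assumptions.

        #include <cuda_runtime.h>

        // Direct O(N^2) summation: thread i accumulates the acceleration on particle i.
        // Positions are packed as float4 (x, y, z, mass).
        __global__ void accelAllPairs(const float4 *pos, float3 *acc, int n, float eps2)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;

            float4 pi = pos[i];
            float ax = 0.0f, ay = 0.0f, az = 0.0f;

            for (int j = 0; j < n; ++j) {
                float4 pj = pos[j];
                float dx = pj.x - pi.x;
                float dy = pj.y - pi.y;
                float dz = pj.z - pi.z;
                float r2 = dx * dx + dy * dy + dz * dz + eps2;   // Plummer softening
                float invR  = rsqrtf(r2);
                float invR3 = invR * invR * invR;
                ax += pj.w * invR3 * dx;   // G = 1 in N-body units
                ay += pj.w * invR3 * dy;
                az += pj.w * invR3 * dz;
            }
            acc[i] = make_float3(ax, ay, az);
        }

    In production codes the inner loop is tiled through shared memory so each particle is read from global memory once per tile, and the resulting accelerations feed the host-side Hermite predictor-corrector step described in the abstract.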

  17. Quantum Chemistry for Solvated Molecules on Graphical Processing Units (GPUs) Using Polarizable Continuum Models

    CERN Document Server

    Liu, Fang; Kulik, Heather J; Martínez, Todd J

    2015-01-01

    The conductor-like polarization model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by computational expense of both the gas phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementat...
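
    One of the two proposed strategies is a randomized block-Jacobi preconditioner. The sketch below shows only the generic application step z = M^{-1} r, using precomputed inverses of small dense diagonal blocks with one thread per block; the randomized choice of the block partition and the surrounding iterative solver are omitted, and all names and sizes are assumptions rather than the authors' implementation.

        #include <cuda_runtime.h>

        // Apply a block-Jacobi preconditioner: z = M^{-1} r, where M^{-1} consists of
        // precomputed inverse diagonal blocks of size B x B, stored row-major and contiguously.
        __global__ void blockJacobiApply(const float *invBlocks, const float *r,
                                         float *z, int numBlocks, int B)
        {
            int b = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per diagonal block
            if (b >= numBlocks) return;

            const float *Minv = invBlocks + (size_t)b * B * B;
            const float *rb   = r + (size_t)b * B;
            float *zb         = z + (size_t)b * B;

            for (int i = 0; i < B; ++i) {
                float s = 0.0f;
                for (int j = 0; j < B; ++j)
                    s += Minv[i * B + j] * rb[j];
                zb[i] = s;
            }
        }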

  18. AN APPROACH TO EFFICIENT FEM SIMULATIONS ON GRAPHICS PROCESSING UNITS USING CUDA

    Directory of Open Access Journals (Sweden)

    Björn Nutti

    2014-04-01

    The paper presents a highly efficient way of simulating the dynamic behavior of deformable objects by means of the finite element method (FEM), with computations performed on Graphics Processing Units (GPU). The presented implementation reduces bottlenecks related to memory accesses by grouping the necessary data per node pair, in contrast to the classical per-element organization. This strategy avoids memory access patterns that are ill-suited to the GPU memory architecture. Furthermore, the presented implementation takes advantage of the underlying sparse-block-matrix structure, and it has been demonstrated how to avoid potential bottlenecks in the algorithm. To achieve plausible deformational behavior for large local rotations, the objects are modeled by means of a simplified co-rotational FEM formulation.
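
    To make the per-node-pair grouping concrete, the following is a hedged sketch of a sparse block matrix-vector product in which each thread owns one node pair and its 3x3 stiffness block, multiplying it by the displacement of the column node and accumulating into the force on the row node with atomic adds. The data layout and names are assumptions; the paper's actual implementation also handles the co-rotational update and further layout optimizations.

        #include <cuda_runtime.h>

        // y += K_ij * x_j for every stored node pair (i, j), one thread per pair.
        // blocks[p] holds the row-major 3x3 block for pair p; rows[p]/cols[p] are its node indices.
        __global__ void nodePairSpMV(const float *blocks, const int *rows, const int *cols,
                                     const float *x, float *y, int numPairs)
        {
            int p = blockIdx.x * blockDim.x + threadIdx.x;
            if (p >= numPairs) return;

            const float *K = blocks + (size_t)p * 9;
            int i = rows[p], j = cols[p];

            for (int r = 0; r < 3; ++r) {
                float s = K[r * 3 + 0] * x[3 * j + 0]
                        + K[r * 3 + 1] * x[3 * j + 1]
                        + K[r * 3 + 2] * x[3 * j + 2];
                atomicAdd(&y[3 * i + r], s);   // several pairs can share row node i
            }
        }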

  19. Spatiotemporal processing of linear acceleration: primary afferent and central vestibular neuron responses

    Science.gov (United States)

    Angelaki, D. E.; Dickman, J. D.

    2000-01-01

    Spatiotemporal convergence and two-dimensional (2-D) neural tuning have been proposed as a major neural mechanism in the signal processing of linear acceleration. To examine this hypothesis, we studied the firing properties of primary otolith afferents and central otolith neurons that respond exclusively to horizontal linear accelerations of the head (0.16-10 Hz) in alert rhesus monkeys. Unlike primary afferents, the majority of central otolith neurons exhibited 2-D spatial tuning to linear acceleration. As a result, central otolith dynamics vary as a function of movement direction. During movement along the maximum sensitivity direction, the dynamics of all central otolith neurons differed significantly from those observed for the primary afferent population. Specifically, at low frequencies central otolith neurons peaked in phase with linear velocity, in contrast to primary afferents, which peaked in phase with linear acceleration. At least three different groups of central response dynamics were described according to the properties observed for motion along the maximum sensitivity direction. "High-pass" neurons exhibited increasing gains and phase values as a function of frequency. "Flat" neurons were characterized by relatively flat gains and constant phase lags (approximately 20-55 degrees). A few neurons ("low-pass") were characterized by decreasing gain and phase as a function of frequency. The response dynamics of central otolith neurons suggest that the approximately 90 degrees phase lags observed at low frequencies are not the result of a neural integration but rather the effect of nonminimum phase behavior, which could arise at least partly through spatiotemporal convergence. Neither afferent nor central otolith neurons discriminated between gravitational and inertial components of linear acceleration. Thus, response sensitivity was indistinguishable during 0.5-Hz pitch oscillations and fore-aft movements. The fact that otolith-only central neurons with "high

  20. Quality control and dosimetry in computed tomography units

    Energy Technology Data Exchange (ETDEWEB)

    Pina, Diana Rodrigues de; Ribeiro, Sergio Marrone [UNESP, Botucatu, SP (Brazil). Faculdade de Medicina], e-mail: drpina@fmb.unesp.br; Duarte, Sergio Barbosa [Centro Brasileiro e Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Netto, Thomaz Ghilardi [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Hospital das Clinicas. Centro de Ciencias das Imagens e Fisica Medica; Morceli, Jose [UNESP, Botucatu, SP (Brazil). Faculdade de Medicina. Secao de Diagnostico por Imagem; Carbi, Eros Duarte Ortigoso; Costa Neto, Andre; Souza, Rafael Toledo Fernandes de [UNESP, Botucatu, SP (Brazil). Inst. de Biociencias

    2009-05-15

    Objective: Evaluation of equipment conditions and dosimetry in computed tomography services utilizing protocols for head, abdomen, and lumbar spine in adult patients (in three different units) and pediatric patients up to 18 months of age (in one of the units evaluated). Materials and methods: Computed tomography dose index and multiple-scan average dose were estimated in studies of adult patients with three different units. Additionally, entrance surface doses as well as absorbed dose were estimated in head studies for both adult and pediatric patients in a single computed tomography unit. Results: Mechanical quality control tests were performed, demonstrating that the computed tomography units comply with the equipment-use specifications established by the current standards. Dosimetry results demonstrated that the multiple-scan average dose values exceeded the reference levels by up to 109.0%, presenting considerable variation among the computed tomography units evaluated in the present study. Absorbed doses obtained with pediatric protocols are lower than those with adult patients, presenting a reduction of up to 51.0% in the thyroid gland. Conclusion: The present study has analyzed the operational conditions of three computed tomography units, establishing which parameters should be set for the deployment of a quality control program in the institutions where this study was developed. (author)
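
    For orientation on the two dose descriptors used above: the multiple-scan average dose (MSAD) is commonly related to the computed tomography dose index (CTDI) by the textbook expression below, where T is the nominal slice thickness and I the table increment per scan; this relation is added here for context and is not quoted from the paper.

        $$\mathrm{MSAD} = \frac{T}{I}\,\mathrm{CTDI}$$

    For contiguous axial scanning (T = I) the two quantities coincide.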